[Binary tar archive — not readable as text. Recoverable file listing: `var/home/core/zuul-output/`, `var/home/core/zuul-output/logs/`, `var/home/core/zuul-output/logs/kubelet.log.gz` (a gzip-compressed kubelet log from a Zuul CI build). The archive payload is compressed binary data and cannot be reconstructed here.]
꼺BZtut%`DW8P;U"Bݏ]JǮRjjGBWV#+MTe* ]Iц=L]z[) X&h%hd# VGR<˃IB[Rάeѕ`#\AC+D+;Atute$$B6ue"47I<]!JNj$Ff`؏c:=;N _Z0pqKЯb1 MΦЋ/#` s,?zr~JjbEĊ43uL>K$ڙ'7&_/[0!ٻ6+WcArzv33E`Iz%=3MeDRmm$lvsϭ~o˼uy7/by-@U`_us?ZbZ [;3 :iPrF:h֛5ywQCSΗO`Q3"6sӧtg'G6 0S9XaXM;}G_۬BrOuydԯ&k^Ne_ܖf TZ3dbU  YdF._s]}B莱 v_/&-"<5W.&ْS佚6] &7ON||qNV/|*Nj?A|w-D޺JKOY#o~wm#kkR˸otɚդgX>5.Y3,y3n|ųiCM:s@`RgNpR3g].y(iޏxw)D'Ty+y>ۧu#1䰵uçpJ>mh}ɏp>$&툿*h:ػ>Z#4;JI5;wɽ]oMؠagO4̦1ghqy9j"c26bM{qoNK6P9Уs(]LL^Z^Ri1R՗{:l}gi #؜&Π4Njto|TC:7kvpCy>87z\K\wey4}8~5P'qdTm\D1Tl=.?`WQ O"xJ o@!Lzo2n' -' T j Y:UW_ݝԴ'mv15I#IS~"j-5o҇ûnh>׵AM2?`J0PF~*Q:q =PQnDtŀ ]1BW@k<)z ˆ 6+kXtg:MF85&b6M3h +Ҏi Uj,th9vbNiLtŀGt1uv,th?>ϧ+ t3]}r&h1vh4tphnZCQn: 1Rc+Fk g:A5 5R3վZa,tuo#+FϷ~9tt}~Cnp݁ Rґ]*j_ד^n4tAQj}+t4"`5"\}pvCQ+ó{ԐzԈ:r<0 w ܸќNG")GDW ؚu֑ptt*}Dt4;I-Fk^]1J>Е۹^ ^%ƷkuKwk{zkG0wȖb{Zm{}\/ ?&`v4tp ]1Z#g:ARhU]1`7bޏBc+FILW'HWZXo +XZst(Y]"]܈5<+F 2`tute3?*tŀx pXjt(Qgvbڞꋡ+t!nn8 ;U@j7v+:վ'QFDWhӎp}nhu!n(: ҕTV~Eb:;zu(l3]}R[3&`'h:wt(; ҕVq.VB ]1Z} f${rX7`G66_O~]^De[кbo2p㏷A^bA9rPmőս+޻#~__&m j݇1]7o_ BῺk n/Y[醶DI-1g+xݱM̳=G[+~ۇށrG)Cɺ>nnmN^#Sk➬F3w꾿o なFnGd#PQA^Hޣ|0V=ғ8 _6ρMeV'yt'ޯ7&G^A=ݾx]|NK}:L 1l~?4U 5JQ)i|d*Pl b>UJRYfJ#yo1_$>h{hb^d૿nM1 8z>]\l-AWߒnD (YWV' E~_+4r]{ߗL1iU#bVJ"SŔEɦa!ݜ(6} #gl2Ra/4l.EDA@@8`q&D"Z25 "{IHh]VD'O/T2HJ(5RPl)I%i{´-w |+HXj͘\T5#E6SJPR:L%-$BK!c[S5HfIi LQ)JӤ΢rQXW3 <ф`Aݭ^Y3d*BBP*0v 0)OJKh!dR0͐)W8YeT,yrh.g˜=Z*,D Ј "=IM.o.cYxc&H[@!!X2hOIlo%Ĺ]ֆyqVer,1ڜ1$G*h*xѬɵ@)ޙg@x:P|`k BDUk09GYG("Ҳ}hZK#]0C6eJHEi}!!>)e-UHTD@ ^<jYZ=ePKф bSLc&E K/X jY$JDuX -xsD u;a&)CLeHzm`!jP[0g"B m>p h-䩇PmgUWHrQZ Tbd  EIQn±D6r lN}`d XqPVӲ3U8Й V4lZTRZJ"*P2#޸PK JGW F"@آ̹, R N-`&7SP*XBg 2x0f [j CY'tP:R_Ej*AiJTTKUɺjsN ufź. `W r PThd_ f[ѦœE@7^7/(h :JA!NuBC2]weJsRQJ+$}Odk L!R)>UCB¾:h"ʽ pB)9$Y* 5N }u5ftc|[4_Ƞ֙D9_vnѮ3fM'cEe҉8! 
`ǀi;[oLŲΧ<97E˻^p %#Ψ LМ'hX 5BW /pa YxV2ztrt9|T \ cuLA-EMp`C:!.hjIR|"2LԴQyeF}pJ$utvѰ<=͋`䢃RVe6< om UXtunP% 9UQϳ5 VwFT-xa[M55D v " ״ynK~OGUD|[̜`I&dOBF/Q"GA])1r9hP E߅^B ![R45<&AbDG!Q֎a:Y@'^/B>JY`Ռ&x,jZr-iox wzL"<)YE cW,D J3%jYDi9DHty Z{ho,:"m,jXt*üODh*!ΒMډ@J6@3Z欬ipj@g# oj T>VD鹿.© 2hԿA IX  S/Vp4q2å M[FU|,g Hq""q¹@~KSZyeb@- +gbauPP"1 |0 ϒC\<%1:pU t.#2]Rc9"Nb 9Y"ߘ%1c47%bOͿcu@,OȈ:%IeD D4!~sRDl"׻%3Q_|Uo [2AB90"m0%P$j f-p2gB kLCRS^OS)ptE.!JQZcO ,LHKZ]la$9sKo:<@BA`!*0UiR 1oWHw8XƄK \5hҦzDGXE1iR0mx4hlBh7xl@;H9;ߗ~e afIy(e AsD79_ xsѨq)0fZ2$mCޥE.h=;\ sF|r=_"W3c<+ t0Adhi: d'"qbٲ 8rf2KM+go@z`w+ⶏN&\֙MfI>H=AG6->28.<Aӑ݆jGBѝ"f~D}v)l,"<3o]߻IߨaM՟7wag\(|ʂO^+f.`MBChLg'b$olջ|ꧺ:B < ^,tz۷L5 "á0.Ju3'VOWOߚ=:R$>%G.88NfR ^U[Q ~>9ۦ8G(lJ:,Ҿ6>Ƃ02&5qGX",yKXJ5^=(h SI RAS~{6ǃ{/=Kp`[[[e܇IC{gi^z:ٱu-̵zSA1_6II~bt%O ~3AZـ ɸ;3I+a>l}N;ɂ7vBO9fz>]ܻy0_ܼ>ovUdNi0oڷCWf ^o3i_ Zjuk*2<|BX\7%ZVPPüJb #]QLƗk۔Oղ:a?9ld?v_`'n/[j⍇y9z:/]3tkwtWW=weWuŽ拀=OID'4aЙP_<:j6zTJB(,<{>/|`C?n'NWpz{,^kc񠎸bu~f[/i}-Uc_OM6ww?Až;3;r^ x.C3:*\/AU5!X^yÊ('\U-Rx㬳F|xOSW_E̙c3lt%xJ7a|}=o@t8|V ss)Fϋb2Y?KB4v+MH\]?b:C/zoGx_-cFC2r~eyx||tƥӑq/6T?FqkWpRm޽,^=he"0KxYn|R[|uЖ[mN8G?7ʏM|L\&8sQJː;”Q |rJ؟A35SkUpoF΍%@O1 -@#P_P}qe"JۘţGzʊL@{]gs_2]1W-n* hʕn{~gߺ[p-˹\ nBn9ZEo{b̍p 5nC_oDˮkyL_̴gg3-46ٰVbYozY·nf8y̱2irea*D+DΥ&`Com"iw0ahI##pS㌻,eJIJ,L"s3VZTzO=Xhh$dUH֬]\ Ⱦ+"a]$l=u"&jnv]^v=Wz]zzڞx|o5̔* Gzؤg1Ӳtv0;P`sq zU=_"2b<8*dhP|JJ1DN88[tHFL|.I2@i@QG=dgИ@n?w! 
񾛄HaiZ1\8Tf?8UgPΓE~`wXS쟛=rt+W\p +sF.OVOziSm^9j:gVsizgY\ޕ0mk뿂:&e.;znfK< ʼHj))ٱyƶ8`:ڵ@/')+}& 2-nEN?U)m9/T4 |5:jAxwrlާ<l7 <,Gyܛk>zV`aѳEgãLI2+xd4N:ytX^ypYC$51T3j3nl d{[~<  e̠bPâv7;CwOȘCwf]Øe; fXE^DOK)) |ڞNd΀D7:v(|h(L89'o.3SAϋ>L y\(L[Bpcfc>ʘP&Gd5uE˄ ~(У !wBF7VFߝ.2ܥ=nYX?M,6p00RƼ~Ȥ K zHp+cc+NcrC0fgE 9ԃNor|O X LIW=s*r Q'!Ab=gX&J}09|ɤB3,sNHf{K2"ɔȣTYʱd T>pI8f x(,|9|) GlE9LgYHd+THG] cjFI0MFUX0> r{gә]m: Sgb{n w,T"58`fDNyAE7%i@Ob c43,^wxbZݵ 92YqZ6(3e;ǁ9R`<6-|JeZi7fc9ɋBuYv* StB  W`DъIů'4Yp_f\B[/botG ,;orj}VT`Qxj~dӨkԜ?|_A`yrڥ)1)IՎ^l\,cAMp9Cp=RSP(`:mu2&&Ԯ̋&qLYL:)v4m#QYjIkj"f`w>=pS_SO#ڀW|Δ#M)̀I&~678(E6i7ezŶkyRlB/ˆNH?P+r);܀F n2HeJDPq0B`iAe2&Q>.# $.'&k~T1d I?~:W+^{5g3g+HGQB7]ߠ`бVI$#/{\a5B<;[5Uh:dZx-pA\H};ye摉J ǎ87\E5`ۅۂ35لJz{{GT@Elzw zF:"5 >͆HKvdFX,9;K52\6w;빰?V萿׋L to&q|!~I+ZnĊ?o5:v v9v u9v _@B*ةo)bwgtv-* WA %=Ʃ\}Ǡ aF7tn" rMkO7ȵlo/:ӌTTg{(D> b.UF}\CgÇ :AcgݮӡfL^$Q"_g!<+mmHQqPťf`, fv(zV7kw\MrR3u}ӕr@R@b4rOrDr WM94Ewra '.ʿM !ʞUG$6٠p\k :;c^ͼ= p7(5*K@39A`khfߡ{l@F&n72 ?}&.{/eM]uzK{Nn_7h KChI~"N'K2ȯYt^|/nmm0ǰ@2o-}WZc#fJn/i1g'8ÚZx%94c~a<X~xv`XoTY f]JF/&1_6cJ#576l=+ؾaxN_}֍M8N_m59yf1`6zk6[p$:57,To1+ܱ(u%[L/YЍK\z|8xagTg:>:X7U}}h_-[MOo8}=ӱ> {ߥ )R:"HmBtI8)nN 04O1M2f6GRuJ$Բ`e1Mq@_5!z}wl2]qOfB a?/7pՁ9Asɸ yx6I|eE3qwA thD!ㄑm o{I>`kue^N"fJ1H$Ԥ?C'`'=]*AX.]ܱ?ҭ}|2̲=ԺSq݉O=a!G^eo>Wn\V \Iغ}-~sxtxؙ5 :F9^uupI<) Ӱ~'poGT<,VkDI^/>=.wDtddir~֏HNz* Q4O#cj =ܪU u7>GT2>DsE]R[)av9G2 ,[YT[Guz^|P @G9*V $3h= 꿔R[} ^^iyT$)+׏c*U7.1󣚥O?̡h_ d6y<#HSP_nE JGէgK3fz|@F7$& r6uPiawc@ d1r|c+qxY"Q`L\Дk[̠!5Q|?z}niDVz#BH^zebǬoGw[l|42K{u|L?-uYMBf )7ЅYvl7Ke] ı{Ӛ8؊/gP&P8VK.Lsd`]g]szRwםLBszP\uw\%(;wkD=L)ܮ⩧Qyh}+(:s2&{ M—SE"WW5 jS 9eۦmKѳ*&/A=m,z5Yw(ߨ1HC_ [#M8Ru7Ϻ]݊VݿϼNo 3oF|1bn z/b Wx8RR$巣j/螑drJm?IfI,.[ 6*PEԅ[:x uo dw/_Z%tx}5:KeYo<*Oշv$;V_S`ogi[Z\jmcBQdwoojg 8֖P=K258!=pUOzbgH/:m/EmNHL!Ҹ0~':wU?iKH->6K4~RVLDUָ9n΀w jj,o[[NC_^w6C4Z7E[|FͯB>@.gPSf3g*˾A]y^g\ʋ^bp xKQR~'wL*E40ZL~sJk9ipDw`6q9]s 7>/qv?jUO&$3lPzٻ,UfwGgw'l0A N[V[Rw,IQlQ-j@lGU>A&@1CvX)7 -dr'nwإUǠ=T:b;xTJeF89e4PY{:@y|uS\ +[K8ߡB1\S! 
5L磽8ffjZ喙W+lruy APY-'c#K.@py IYzLqڱQÈ0Q&aJ;f2Rʃp-1Kȳ֢RGZ)?ګtP̩VhzT˩zvtYZiF=ԧL#t\n7qZxTttksizeYVZ4|&0Os/E=ai 0FeH͍#,Ƀ#:JƼ(@1s`@5Z YӺfTu;"E Rrȼ>̰4N+`$d#ǜyqk\eҁYkڅ“脔DD데d\`9(nq\xc$hpkXq@G!7PTSFX ̰Ď (%!C5p#Uam l:@;k10RV kRpT8oϪ s#Iɳ|gwfio|;K1hc6H%a8SMwb\@N'En҅?呚5Ojt,y"K~Vb~sK"(Ҍ"V]/I1[}4}HMlA&y1ϥ\<- Z-']:qbI~ f[^f@zeДqJhvg\ &z>SRٕ`^O]>ek,7\׳zч|VF9WKf6 "#<ʢGyy&ZOwIF菈_QzsҰ4@)JeXO pD0;tA J1uf6 7l"\물W  K! H ALH+^%/ύRU+_*i/ܙ\OfuzU`Cux䢲/iL`UD+,xK4R?ܧ|y)WtL,sc/ɠiA}'u*9gi>QrV(.f3F8-WۼhqV鞴IUz]G&QMb14=咼,̼Po|ᱼ|&`WAJ8GJQ&߶#ǔo06#DSRHm'KD, A@Y )J!@A<;чFFf葖Aʊ/qq5')N3qqlIp8qƈS#QFFfa`7/AYHzkkG/XqDQg7h7.?ɾ%Q,$K0jkX K'DB VD'ӎc+LG'nhL?QJr.Xv^>rQL2J͘"GpduaL[/u0L`d7@t3vgUW!bi) RgKй ғJ@JﷵAw38h(mOxk#ڑ{T@}bk{\ed+ VՍjjXw ^}}/j+m6 .j1èo"B(F'g,RYAPE!" lqڨR"&QW=̊*KRYmF" -\vIFO[n|Rh|}VxfLeAHRVa S띱Vc&ye4zl5ͭ>ͳ4w,gcnJe7!IJV5jxT~s~`.0 $- T)p,VJ\tܪΚ|ZU42a2'ZK(Ir yIEt$LEt!"'U2`!e_E.qVqLe r]Ad#maB, !%fυ+: W rTUh\VE1|PYmXSg˔^JJ WU,HKn΅4#[$U @ҌgJ)L`  EՌcK pHȻ@Jٗ۰^wIs kE =8|^#p+)m-}L擈#,Xپ4U'8՝׺8XO'ͺ+N⬥*0'>-e%.q e!Vf]mV؇"٬g[}~aY/ IU3^vUO/Ow D~w2c) lx)E7-U;5B=Bq$T.E'C&`Tg)٬ wH=i=Z8 ?2_?t"gh<Ⅾc,Yydc"")qJ7DVŜ(a`h,l;;F|ۇf;ndGzZyt٣o_Wl!G!:+t!r0tK#Os 2_qCV<eG ֓˳R)nq}=8<@Y̘C:,\+ɗ2o%O4A0ɵ r B($<3ck1RFXp4f'1Žz{nANZyg?>_$ˮe~rTM~Ě-1MX&pާo}_HdI[pO`F,+*+UțWÕc1\p3~i ?$Էimp~ϻ.FIq t9́ DadPbY s52}90^.M-p!i.tB2^SUcB\M녎{K .sR-j}q)MFEWr`PnOoglFonTQxk9~#їj.؜.˫?qe#|O䳂Zwv^YEHF! ƘY~I!X}N3}x#Mgo0"1} !?w,n{_kFGS-QVWstMB\J' dCB{y{CҌO|`S RΊȍFt՛ѵDJ'0)$ZA۠FT$i2#$ s^GN\ [[؈R=M.iqalLs)엠_Fޒ̞:v)k-^sZ:Yޥ4}3Lλ\m*[,RZ=ŹHRHGk!qΥAb+4qnV|xy<٥4JE#D"XF Z`%Hs4ӃW>3(* %B# 0p%B°D)A+ H!UR aEFӞt䳜~lS K]X %$ y Y` ~rpJ3nR0c ,' </u9}d"R2XKQeHR kQʛp4"FjX$rOyG:xj[xBD>Lk@)9#0ӎZ2m]T"[Ap"8fa~hoJM8M?쇍G7wyCKFSdoicY:c+#4H9QX, Y&T[R` _Dv? YJ]>&F"|؃jJ%wc3UDZi+ 40^&[{TXQD7 y.wȞC|AYR\_ZNJCH?,>Ym;v-x<ܐHm!u9+ 9 ȣ=ֆl_1Y7-F.Հ)7[I9|!˚R \΋Ow=+'"Pr30ўyىzk}?G(I>O/a<x\m}¸rϦָ˜i}$i9!lˆ^FcD2+냉KM-a$EV 1aROd9۝>Fv8d׳˻'/q ;wfg`zC~:0zIӫM4*kc&PfTetۨQ1,%! 
j Co1gNCܼQ;[B6A*uE } [%>&+[u2l}3qWfE y_S౥Ԕ0H@ɲ> `Yb`U2(i6g_fp^ol랩zirjHLذdK ^w-mI.qDcmbI-IW=÷9DyĖS]U]_uU pI{S'WJi}o\J{ڃZBiQi]},jx4 t0okO!~7fwt6Pk- +#Yڕ~b^m\>J71Aʂ/ Ra,]Cïh|a ?Հ 2Mj浦g'D+*]>l'$Qj]I SªXnwqz) =3z_66 +z3̌t/Fs{ )z#^67=VЦͼdڬGL]%c#$, x$2Ɏjr~'UFyڍf32R쬔H=w=^֫;Gu* p=]x)xq4aUkT FB0U]u&!Wl.:*GVmG7/R tv?tsj y sZD)3ύ%9.uQWÛS oBe}vl6'1[Ym-jUѓ}(F9DŽRXW6ȹQjY:@# Ƹ#aReZ )$ܺVբ=6Q|wHOW-;̆,\}_.B;sibzũY6v;{Ģ/}SDs u2jt-jtV/0ס0rDg%~g 3 \߽r,gLT++Kq*شQ1ttµu&\zRe]h++U׬Po>ԕtLT~~6DO5QxRQ;:;0zsha5d5¬?zR)~g?.C0!B*a S띱Vc&ye4zl5ͭU'*>~\f7UVbƢzvV qVႣgB,V``]LMw.E ODކȥ}Ƕ';2SOq@͗-otrj97cf }[ē[_c }^k5[7h|Eۀu"zoq fgҕUB8%yOGдvȚ2cD}"V;B2]r)x/``H\U'it8bΰ}:Ӝo '䃂@P )-MU݌bѽ ͼk R:5!*(F]xPGJ'0\0PҠmPV# $0YfnIRLj7Cҡs JBw_^ިϙ6^`}iF|m֔DtnQ>'@tkuCXJ⨡l$S`Sj h1MnFD b\)AkɭLrsjM{|\ّ4JEGD:.@ JZ-ӑr8CK{@$N[lBԄTa>1fz PyX|ˎI5L!diȏ\b=*(Ih4 t5H`ߦVeaI}o{>y8R(VrvxoLHI HX+e q۶rLsCA)Dv<>wl$~ijAbFTfxp6i-Iv*;1f M>؈.OwvV_-RWAF=iR;_'Nu߷'䭕Bj=X"bY$h Jmd`':EW:ձ' FC#V"^[@hm00,0i(-0F222٨8Tx⑔1)3N!(s \~2? F'!R_V$4Nק ؔŜ?}RFG>yW~2W,H~ ;ݛVHWާ7ReɋUdcR07 d68Pzl,֨)rFGO(48'PX"@I S(*7/btg\hXX T+@^+kV6yST~`8QûYY_u۽R76W''UXSrc9̀K#R/kPl?7ztg{۠8V"^ <^*BLrJ͙"ǜ #:bD[Ge[3Q}!"`k U cJ#68R$#Z{1[,vlAer;NaOplf&_9o+&-^lkܶxl]roi{:gI># P`Y5 wԙiaGM\q˒ߜ[uL*|4ػੜc̱߽l[~i<>:K0LSD$AԼ! x摂 15'<y,e [nu '1beP28-f9d%¬3|%gS4ao7ɄԓIv gv/63+Sμ,L:LZI޼ X%Tu93#Tуw> o<5,wٿwY _/;[h߲<~ß㝌n03?gGnv݄.:-Xd>7{)vF_ۿಢjba¬, lj )==~yٯǧoO^:մB7/v>7Vşp|/~<:%bf6rX8 aIp0Fua#!=m -%/߽[2#QHY5 0k5f,`ZFL&Z,)EP/'o=/O+ df1JkOƷ?KGȪ} qxt@0 `K_{_5-+[DJS l* VH(g,.V4EQ.H_>DZf:ܯA@(3@"LaS.<c8a!8@(Õ>8jԊ;inךZ}y^8lgbGG@" of}* , aaP Gq (|ɗ_țOAxSS|q%@y!S%>Q %s 5¸g7OjŁH=xqĀ[)$!l4I08<"a )?Pr;a]G՟΃ ^nc|xznkƧ[Mv{\OG3iNe7}57/~뿯4;i_Gk'Ȟ iWjE|n*Zdq}j~B94vܚ߷:}e lÝ1wؔ{zlPvꏎ]է:c"n\=2МjkP!g|ߣ$nҫO2:BֽDᗫhD=&zo@c(lҞn:iGZݪNNc~L 5[/;^nYodtyCU{tk V(&{7O^|U:l|oMWS"t1fix޵j[swƕ]3lKXV0g:1& u!ZxP}E&1|sR9ҁG}VV *`Oz7a.&%ٚ'Lc&a/qC3\\֭Sƙ䈤pOpco}^@o yZKC2·, JB9SR@ߛ7Pl|ӐmN!piPpq0 _"w0Dvm'!CϯE@8*PЈ 9! 
(-a` #vW% &CXHQ%gKyW2\ `3]{|0st{C107QfW43 ׮*>< TXgp1(LVr/߿8,YR!Ys< W;6KނoB}ևkTٓ C&זd6a s*T΋+!7Y{+C)]+Jט"lks<w>IE[wyP%O~N9PB3RQEH=K)c!{X[`,%R)iZ NJgXRo|^n+8xK|;TeY}Jƣ܎UR%7q>cdgϧ.L`@ 0M2g{}aG0aPha2o,%{"ѯ %0,R`le_I!t^YKix#F'ֆxlwns3wPёG T#%q\}HعR!n^s"5 K8Τ 8~\Vf> NFwӵik|i潼R<Im^<2k^p/eI,B6e2چL$lPQkz%%k¾艙P^ o|OlOz.e K/cXUQư1,e K987_C?Q.Z\z!W l%\/%M 2!+5k)ɥX.#0t}cO1"yF L >\"ri⾼dUO,6|egt97}g ۦ>;?ӰՎܳZ Ow/ӠESJ:,DyޘE *$ r(dY3fվje{l^ףOɇ}H͊ɪ|T f_3d~oG;2YrN:~qPAuGÞpf/ ;A09u4/9^:!1v끞4lI01 Wt꧙ٗAsc.X?@rZv^=ۭs\꺻?5vA<5 w0VeK,P&.! ڱzj/,%8&43%l~?mUzZj2FXIOpGyR;4tƈlV_^{,| + xqibX0,)֒1b$׎IyƢjr![QU JxgyL[yՁQ86<{͓FkhGVg f©m>8ڇcԺ8?%vpxT37[7kf9$mah+ȊW{f5ǂӇ :#X=CôF 6љ M} 8tLZp:ٳOa|mV? LB̋ZO R:ztkDm,iVAn Ge=4:|NA_ 4AFݽAhyTIbf~e4P[q3h9uUNE0GlF a>j9_//&{=%p"֘j- 5"g޻~ĭj(N|``oGtJm pm^suqع'y<-۩_ Mv~%-Ý!,!z `' |ML1)2Zbjj9#DD1S.(__AB(w]KL8pQPEx*EQw4"\E߫^K ]ިZ]h0/ {ZwqY0s(Q\R.{S)2<\R.O){ !\:F<@Omb{z:pDrޱsDM*\I q;ԁだQ턷qva'հvVcI|UK\q*W6Q.hޛ@{&(fafxcx3#7r3/, i8íJ t`QyMɸeғv+oPlzY9\׿Xy_nܝq|%ҳABv)] O9 $gLX.1Hmh' &j}ONK'S0oǗ}(i" PO9! 鴃c6rjI[acŐN 3w_aiزB&"a_)#a_^;v_a~oySD:&Fn&_ O|]NDgb<`U#GH fXLy^G46;zܔĮ: ()>[ErOviPxp"@Q5o#BzuHy#5'VS|8b! )$! 5tHȩ@.'QXы)Hr@~8ISYO½2ugKɋL[ _:#1J0*B'Nk\1c`-1(rn>Rh:bpz]xQsʋ.U-D%M@|XExEpsѲxj^2E N|_@? b" #Nk]D\PBj+ XI4<ʉ`+}RWI&2= "< }p]?h@%K?%bg&g!IFCw{phw &/nm9"j#㗟> meNFrH9A00&<6LKN?v?YqI¾Bn`+$PXP*}l TJ? K{!ΕQ.%0M= .lWhNC\ҀVC`\L}-}UIIc@.0vA| b}XR88E'7~\ڂX>;.;.f;;'5`?dlLPf=b*5Q. VɈF"@i**@o:(G!cGjcio#<1PD܉K-cW}3\{h\x9^e(P%n/2q)"!\&"!,"@)iΞA~N+Em#I]`FKrlIva3MjIJ7!ER3XR ؖ8Þz:7uٔ:J) 4UjE$.X5h #k2V<t&!E5:AӁ4Ir[򖢾>p,6cw:@,Nۥ#%H"  a-Cǂ<Ó#FG*7)R|}un#F;]{H0,L rH]XYL^jʈڦ1hքCʘW7%yFE׻S7@ wcyM^c!S'*ğ_"<1VYEЩp6ʌ :ڀmT ƼDL"\k؂~6Cf}mx糆VkJm #LMg3ále%D_s>'{omZi󿆬p-twhpi%f+M\62a[:4euHW>۵7ݐάTi/ht}؟ٔ{%X^Z޸s<$ DZeNխDX{JuϻɰfٳkM$t+u{ANY6 T-z7q|SS]f MKnӢ JтF@¥pl4f֮ 7cOo2Kr 8M(rneZG`;X+\$JK\橉zv-fTѭ+m6Hu$u~tmv>Ct͜Njin%Iv)mS"!&Q4FXnӅSHc DB降Г]u܇"L}U$Tr O ZFNP{3(.`89K|OXG"\J =Fc*5( ;a8zU:"yEF0b|5M(q'wr_?61ο,ge*{Bif5MU>ч%՝X~<)/w!"(bϫv޺ݯ<0GݿV"fV:.\:*$SEmemQPI-W-"ɯ`A'aB]h)9 0{M݂ P))ōqKWM۲`_h[48V-'MW#\/r9̑htmfU8YvU! 
LGUGsHX>oq 'd _.)z]J *bE e&۽bp])8t{96o/b㈡Ũ~R던n 6Md_Q;GJQ&߶'ggDGꉦykDOX8 ZRkS(J o LOmH]Yg[`°  GIDi1dAt=`#;{qUd{9v^:4p91DiBe0cX+)$L;gI( &uDhQh#:#G9EU _6BHr1 v a#A2?9HyJx-cU6$S;E^@~#ko yn,<LFJ;S >iiߨ%:+yӁIyо:ON@2$b E/ ɉQkD[L=6ɓ-er~F|-c֜N>JU&' _> PKϵ/g"}M7Ѫt˻7?'4qnux?Y~JE]}?<iz*;%,֋*BSF68к\&\\Vw5?JfT''֤[P"T zX[#v5{]~>*\eGU'WZ]ݔ;tQKx臟~#ln*-Elgy;i}]@mG57xc:r~L+02AS5>F}ayR|"EY*lf4zF W;|s6Dyda7d{T>4 S JUϻY`;?-JU }1)DD܈=Ky}(xH&)X ר]QQ8 ҒPNbXXH>u_`wPɀzy5GJGgHqW>RL[)Q[M /cL8dtuDVmh)^]옲weJE̓_Pdi2^,v{ zQk %I/P!`V0/2[DJGt`ZD"rRv ¬*Y1 )f@e r6XGm^\,cc/cZad@Pg($^2N Y$53Gc}`ivL aGEX}9Nyq.Lws\x.L S”0e.L S”0e.L S”@L!M}^Yl L:3˥If_K^/':\oFe:6иUQy'X˔@Rt!eQf߮nZ"9?!=K?*.>I!>l$eNǘ y$⧛0+Vgg=9tsRy2,y&B'b+kggx) k}eiJ;Ӌ->oR!ϋ'`o=9'W}=R<+ 'k'[-z> mŕxJvՎϡ f:NyQĦ q7l1'wo[Svzpglӣ; g?E2Dz~5nU*=@?)1ښMgMEgZ?xq|' #z!Uxd!R&R/5eDDL ` Xy$RD{;i3_>\j>Хu08Uݾ򳴵AIqv0,&,q%T8 ySJKI (Q"!Ŝ(az)o ]w.ڏul ][[f<[ޣ7hKMZg+X7lpYdf`f`>0w,\]<32(N {PY^ZDDzC%o<;7xն0](H{\;ߗFfu|9 Lu]1vMs7] ||WCy\ 8n꣟*-)MIG;YTbRJ|4-TicN胝dXUƩ }@y1,jwޞ'=F_._Fs [r.SB|֙v/snѽv|/O/"b,J?I!Xۃ%ȯ}?yVGX?G}l_38j ok9b q)HBF9$QabUP S =U 嬈hD V:d ႉh m!M$i 3ˌ3+=sw:T6Di}O1I_LKHX_z=L֔n 7Z* v2.?{ K$/L^67uC+/>:"^` bJ#{QVPSJ:SX4/:(&RN3r{C#8BSAQ#D"XFz AIw:R<1Kw03Am- D)aUΑC*=㬷tY_Vn=N.` A^)aBp,!"11G`Zyƭ^fL >!pRV3˿"OZtX-Cr/UD)oft(E԰H(6jGrl'6HJ *Lk`Q0sF a VȢwQ9 okEqmD}h݇ZԬ;LI0u#d?8_%XqPS 2m,6R@r,Ape)< CQ!pjB (\F[DQ \SfȬ V_{f:^>ofbm&PJzVc67$ËLM;KnӢ Jтgʛ6WAL nxgggq%NR/Q CnDZȦTEpt7{h,IBX~{sk̻)'w7?=zoK&iK.3"J4XsN)8 %96FXp v%rmEΗo|6㓐O Gu}dV1q{$og7m#a ۡ7rI}#| JDϒ Od\X J@AvrF #F'$;ĝt(uҟ)ة37؝n:D`>nt.F÷c`C ^e+uOOh v p(RfZǑB/I,Xo[>zѠ~G5 @( x@^?w?[+ 7-q`'-k<.ŷET;/Fat0Lmxg6eeAӤՁeI6.g7+=NOj<.:/lNsvGeq_A`gK˺)gHRJxu:VV-0/nm|.Sٔ7ޞppᕮ_vq0Cm= h. sN0YIHV\[ <KL#BFQbh!y"ʺc ynL"w9Nĉ)8U/D˫PiErg",2#$wqA(q"v^ MA(`:!)+Rtpj{Mcߚ K|$CD1^xr=_ʐeH<-gEH`a~WtY(_6;UJv|&35`& q ]LEqV49%]\9ۼ}l42-?ml*|=jBHԫ$I',ZXl9dѢGJoa_A! 
ugmL=X䛼bmCm_@YEI"N^[<rK`FjeUY$X7Lp / EuRUbĄ6YԴSL^fl@Ea 6%1|HI)a!Ivr&8!]ŒNҋ?Ri}Wk4*2o7 ̄~pQ+оu0F@(yZpX; JH~f?ODayGU8jj-*[4ʑ٦A侪 Wep6g#xh>Nd_ EOs28AlGxZT[dCNp#ˊ4fWnQTwAt]>d`Jq'4+hq}bẽ|~SHͼL1II e]y6OFc :'+3gBaD {@)D+ qRy=:cD?t4ڀp&"w]qy Ck0 biKJu5Bv]KWo|t1%gm~y+ %S..E ё|U:}ԓ(FmfDTb\?(bQsARi9-3\" !i8jÇLz!}_ q2'BꓟIRAI~K  UmVmb.:aW_}`F_>.'n ;o[I%w2IA{^ɶp{&t>p+T9 Sn bm;~ оGXu`=|*k}#RXtD ftvpЖ͵j獙Y)p*TRxSŔuuw*HhMDf8pgH/(߬QL:,N1XiyvvOFv]nJ-(*pҊFv;Ak oJ"+~Fa#9-NF4㌥TyO"4+y;;8gyp~q6Ftk| Zis`$L;jkL2qD~ޟkʹu#![FR;0[>IBuueNE\RlN9\Js,u<͜i{<~3t4 2aWOQN xEx3,@pl<ݒoG~|j|xDM=g94)"b(DG8ϜS"G$wHF+3\ۈGn' P>]9|3I[77JfBp ߜ"0"%D܆9#D&mrӍf^ir2 аk)^ztLIe\PQT"NPE2XI= ⥵g1D D-6c;kZvցR@NiDUDP㌇= wn |N:K5wR 2s~B!垆[Sj3fTc$6˾E-Ⱥ.XO: $D:28JBy͉Te3$w2va]/`V\?Ikt4Y@_犯'!c~vsYUmuGbۑ]Ī慔ICm`oSfyKf*-eJiRN ?Fd}_CW7v;$6X䍾|W>]gIY2.?Ygdag/.G;_⵸͓PieW❫[>GjV6ˡ)BWW= P&_nYoO:|'0 @VȤQ>yt|˟y>CO5`10]t>.HOheh@sQl%igqd<#0 x *:ֆGt40q BVp! =B&`CwX@]ǑGEY ,uIm5HWۏN1n:hM:~6eeAӤ!n^KLIҘ\P'%isߎ>ܝ}MJ"p)dF`McSJ2*hj*L5/\Pɒ/\yQn ‡qlnY%FUbvt`[p}`V[φsHnR™;.n B}Q m2.CnB+fۈm[?0- 53J%22^S2,7#;_m|!1$hNu g5`a 3Ai PAzg 5E5v;ƿrBqǭWq V}z/P?7ͽ!؆BïH$UKWaWoBE[>6GQ<^z%pDW2ph .vUQK,[F)*Og;og{{{{ t8N^i΅əqKS8\3ĝFim(G:CԹ5?ǔ 0H%IOҊ߷ARfY+9gt(Nܽ;_:lJar\WVS bKF.HK$b}7ݒV [>JXxg)?_aک$Y׿&O~.9{eoO0B< ?H.߷yjۿ$iގ݃ÀR=O[9SOtv! 
5,fФ="_M_|4Kkr݁M l^;j6CT)'.Isזŭ +P+|u>P,hsP{$?ܾ@߻J:MΛ㻋02dihK˻n,3{COW3xۿӳrWǿU :N/08OK ?S'_UAOpgËlyٰŗvH/Bxn|8u%~V|{7qE>[x9OY3(w OJ|dmKXrm`b26\,`dqx5<2\l{{eG_/ CWjg{1^_ dz0χq3fRG0`/__wE% l\wm_Fk}cnb!SZ,|')6M(J#›su9ٻqW<8H0; -g!tȖ 8',{%qBg"Kxt;IOW:_,%qGA[_ p6~vp>rd F~xڄӡu֍sç+K&c.*ύ(Zb|B/-[< ]}j}3{\V.#6{{J]].ũs#6MƋ`@5s֬{r\Lj7;|e}z }p*ܷ }p*ܷ }p*w+P[YuqGIq"ԓv` G(}ߣܲCVHI0C9߭ YN;t:=t)GzxU[K^"Vq7tf$,H_޴އVgg}wH67F`#Du6sf^m}x{XM?N6:Vʽ`5?K^ԟg jfLkwmS+`= Ϳ[N k!$]a0-: <:!YSN .OAK098qfĸu7}jce ^,\+O@N?(<7 p² 0 =WXLZ[srZgR5 (C?+<pʥp_  % c?Xry"~( `E̅j&j4rauMA]N~Rr"ywx?6}/'"4J}U{w\~oy r|U"@g{2m$bkX:;e[]agZ&FkwS.7J 2RϾ9<-g[E) JqĩYZ 0Bj ^08RNR9ym\ Oih}_TmIPOar\dWd33?*}&]$gh8zF_*ŅVWOG `+hK_Q ^X["3j5~NL)FUHz"U)׉SDܸEm[ղQ:7ocN=jT KgOC+2N L@<k5[K KF8"bbÄQlbw@AuX) ~9,vd ԇB^`;Zܱ$mI}A܈$C\fZSxͶ졓&QF4B(`\Pl*oQ8]xf(dLvgP~iقZ/Uf HgegZ;oZcͼujM6ͭUim79n>K' Q䧛 i7`*ȴhyhم>1Ё7n+楸 mM$u>vLUc{%8A7MLԽyO#6hhV|~n*”% Px{|0ԜK)1j6p)HXKC؉]a`L(`^s=bvg?xw`QjD&} b?03̾64>'ݮ_*ǟ  8l{[ov_vm:'Ǡ0k#n˛ajmkSgg5ѬFCwWi Ok1Xڛ׍Zec?kO/ڭnڗB, n i?gvԪIPʟg#5uF7%3K/?5\NOHN\d}j*YcV(t5dCQ$DZ|ȖEi8tڂ?z[0dWRF 08`=Bk$0%3 sj IF2lBg:tOnx>XU^4T*jPU!\KZr 4/*ZxާKiJ3N6v \ ydڍCey߆nD DnSmx.c[ƯLuz HMc<q߂"@^ۃh7cwBsrFhS۫e-HxUb5_Z >g]ڱe)Tdc]>klEg)гZ1( ?KA䑜iNS X|'LDž@dtzhMNs 2Z,ИCx{d4%0HѨ!z~q|S)cA%19쵇^j0% z '-Bpj}=p6g%ܲEHVW-z?+J 7,;<9GkSm\mټ; _,{G@o(%~;/\I2籧D*_~?/i@z_SOm6Z"=78&X d@/"" mǎ-%:7akrR1Y1]>40vEg5ws~j9;NW9cGz~t]qD?M6TsujL7iӧOkrȡ77U(p\~LvcӉbbr7 CnzSF`dn#[+P9Od[5$nwgi{mS;j,a6d70߾K$wPң <"D\my Ԅzv0&Wl%Q'N@qur!d8(6}쑪eTfZQɓ/P+"l_ګWs\;dZP^m7" u%ݺVUޒ5mۮxaȊL0܋> ]dcz9knOL8da7{e5ﴯIڰ-גUYUm*nV m5V//s^ܽ0WapEw`4HHQ(3$M?1T},q) b+cz9_!}>ha6f/Fj٤LJj7X$%/)Zɬ8Ȍo'9➳X :@ɸtoKp[n`A %\ ,h焘( ΋urhwz"o/O(yTFAxn qS5^MuýfRq}%Uu1W؂FB8ggdgg.1wdD3g7fɯ/eβVfzI: <~i:sb@z) &}҂୍#٨hpƷ"n0,A.:5PZ}bqSzeQ ~hV ܬ-V{^}nwe06hxe떄6ڞʓ/fi~yVJJjȱ>qY4Xwfg]ւ58p;[끐$\΁P;辈gJ x] ~H^ѳѓ:*}aSp:ec<8Ḻ3_]@`!JI+9bē$H4Y Dqb`tR/S2/[垇)^^ݼ`eli͌Ayz/]ؐh1ozNN) >'`DYHS<ߵ<8-}Ae%ۖ*lCp,ic$c@I;ׅ=YbAC{E:f %Di5IJS`(>TRTdB8˄NS kYDMCMCM;]@)fEpb}#07xq}ƻ)7vdF6zׇkm]'O'8\2?Nϓ<@=tuDGfVSV#<5֏<6B1q(~;Ý[d{pM+@طYi?aI)PB ] 
g_|Yb$VVKQcB˄^ǹ2<97 OV]BYֿ-Z#pgiή^G;@:)KV(Sx9sit.fk %MI.P-2W!FS&RHf)!3LI\&{Jb,;A]`g3S{Qow}kjqP[k?d i;I(sgZYWKe!2mkHl=cy}0gi&3( JjbUpUp}4pz6̵q5ۛJz˄!n( [ (=h6Բ fsԛW5#ݗ3ljbHB40VĂ9WޔO٤iJ^X`Vtv{ne|*ˆ}Qtciۻzge2ղJ"2":Z>81fC7ct1YL"Gn#&xwZh-M!*WJW&qO[B,c%a)^ /Ҝ \׬e] gT`Cωst6: {V5]e˞Eq ՜FkK4 1:t!3BKcUFLsקgז OÇHRAtB&U1erzl'CzHv9SPDA=S%e&i ^Y4lRF)8qwg gG=;?rZ!AB RZ$F1ʥ]y%'LRd̀BzC6*e]kNXʜDftsu`[J{Ì.YB'+R̝ȌSP;'a}9|ށSpFJKկ?>N~#孛ʉXѣoKe;^ZG]™depfrrTk$  Zs>s[ 3w[xSc̴>fRk{`j.NT'BY IWN b!SIў* DhMӡ#Qx/=,]e8,(ujt=Ļw|6/( 8*~1`/ڙu/a:-l8uвSݻ4Oz7=/,%kozwwxs'J5jXx {i3ќ>{v-u&6 :y}5y8XPYz _w?:u^+EQsX失^L.3>V7I*Z7U[o^T4 Fׅ~[MRKpt9>޸W7ɸ.y*> m;ۻi͘mc;k/"WDk^ u@ۧ=GYe77g7G/¿(vTU\/8>J8)ݵ --,eM~8Q/5F\S̕5EOXGVY^vL-"WmPr][}ӽّ4Z_I@+wNVne -˒bV[.'InJ+c*ab \}+?r pp}[st; t@+>~F+aG;Mbor.ЁZ#Sʯ| `$ڑ. guB&$.ɴp8$\m޺ٓmLlZM&nn]}s0~c#W 6mW=֜|Hc;8PFG0Ix}^=Bo=d#K6r],ٗ鸑/[y{ [i5~cA~#7~#D~qG~qG~q@ŏiB~q ˑ_ő_'_y_QoPo_őő_ő_ő_ő_ő_ő_ő_ő_ő_ő_w_ő_ 8򋿁rE% ő_ő_ő_ő_/1CeF]LMi.%RjJrs5aH)kJ+ɘ辜Xk^x 3|trzfyZVDIY de( M 9؅%=@~M,dqCb [nHWwu4`([jBKպׯӯ=ApS)#XRJdN*"$ M7nTNK{Œ̫P$aDW7摔emLie]gvjxe{O2'Xw_t`hw?1mQhˈ-d ن$Y)& Ixi͸@@eZV7jTwWծ/j[S]2:ǘ'F:0.&fNq &cE UQ*ˠkC]87;[uPRgZRl; oT~ߎ4+†g#,HEFbA DFc36 rHm+'HòTo$R9Ab"ALPkA 1H]'OVsPP^YC QZ@MRpjҔG-X.F$<2TBZGQPPiaPY.bun6m%-~H?ll2\/%Fk)9l2Jf1 fkA4:#iJM 7I( \J P[vq/E{"t}Ot|Kd)#;icA>JXrў;wyn"-}at[ sHdܥĥ s ST`"Z;sp4 ͞Q.Q>piԚˆ(qaqs.I!vQ[%==pvLz!JYڮ֣)2~}q(#[I+S!p[*I&^IlG}1Cj$/9N3C񡻁nd1L~w0wbg~ p6[.;-riYmr+-1=tҽ4{4Ԡ؍:`ZH>4${؆GKK\m%?1uxFٍf8&/hG2;ŷvR1E3tFgIøjG!:K7IôbM2djV*XCE)-0J('5 c8qI#e^yoFI]9O 07tOA?0..YüW(el|8}}@y hC1pӰu:):ݍ:a|Hq("AFQ&պd e|a\b<'yF̉1\fڲ\geQ]Y|>wû JWM*L 8wm\jEr,IBd"b:͵L9 Nj)HgHd:ͲֳZVbQsF, L͙36lKId@`%СJ"Z@げR&Pr%Rɴ(plt+Ɔ( 8,%L'gefQ;@z=ЛIբ*s]Ar20Rk͙Brd’`X!gE"h5gԮk}Ml9Kq>f܈'F7ʉ~NRkwܽV lCɪ9zUs)ekIu%3kҸfڏ ȝُ8]7dx ܎-m|R0$e8 &1"K3!Cz.XoxLB;ָ,OV\RS.  
a U[k4/Fg^M4%j_d;FZBnL_+UUfFq&$3MluxB,MClKy<+c Ad:ĩT39e+Ȅ|4jA[N{폻pDֈ06}=JQx1̌N66̿v6WhԾ9j6 [Nb~e(RfL.Y:]-XY۶HWUvtZCvj9L]%J6tjS6ZP t(/%ijo 4 0,BRBWv PVH]>]!\B+@uB5tt%$$$n8thkWB2ByCW{HWR Nh@tաl Q ҕ|W8 W;BW]+DmCW{HWHi/lq[=%<˒n- U48s=nu*ze%m]^ WjDP%'nExV+r#uVhLFuX5б$PһO~Y(Z,j:n;]䋵zVTǦgURLzyHuzrYzj.*za6߫˟!/@lwcm?\J8ImrRH{[ QɌcV<@Wq]}|hx}~œW/x{[=u^kGIdC]O'^{_2-ѳoW/|xQUZ7?__.oٳqxc`g8$!uu4VF{4zmuEO-~ӁGnG,^/B|owzy˨O?ؗ?<0_~f ] e/so< 3~((פsrG/{~D]~?G40c05)6gԯ{R_VU\z Z~Ct{1k?柈 Bf q,&>lGц"+D ҠLrC*%^Dr4yg^nwojͅ>=l 5*Ok1]?Y'(h|{7k+L Uf QҕU ]!`Ã+k h9yB qJ-E$X f釬ry\ZzG_]v fl]'**0^\J),&E&L. IubD]yWbTpfoSУ9Hcw[eAPʤe)aǍ(88 BۄV!DY/u4#*9?4{ =lE璼=Mv5]XlH] m% Db{aԎN[Ѹ؍oϺ8I|9JuJ{b0Q^>! / n 853mvK[\8\d8M:[&JLvw[PSHj|t'd` \fM}= ' tcL'^ ^ȣ5~pxx}PiYBDkngU*f7XeF ۼֲ9,PKPƍfY7nn0K[MoQ6e-SЎ)TSvj3*hN-qp;l\j1V{y]]pPʚZ e8__RPˏpPAjSwd]?+0ҽ!pHx;؁7LEy vR}ǹ>wnXx;f]=ah)NoqMzIZDФ|\3o2XH} q%f;Prc*;TTR'v[m\Jl=+N-(o !jp$RN0LΛFJOM 7ڽ9m}l . FBbAD)MCW{HW1;Fj+D;KBgքCWP v(ec #]6 -VکP2ҕ(]`KѮn8vD+w~eQ ][TuZ72,C+d .ةz ]Иـ r-++y0th5aNW(}+N}x7GW؆CW>]lyB4tt%5DW1T"w~t(-W ]!]I* IZ` Z ]ZCww(hjJQO]`h0tpj9 Yjo^J)jhu-Kz 7-h,~=ww[Wmn$zXbwc'% ZGz)?R[_i$v ]i"9rd.By"W'#0THbd8 5X4QJX4{h i#TCWhuB#]YN$ _e8tp y1T5tЕYMIOWV^_^--R[te+ժ]Ob;s)` ` *t(ijqvu8tp- t Qq F$ ҡݖj)6>ҕ@$R| * ]!ZѮ0DWX` j ]!Zkv%=+%)'!DCWJ2BX(thWNWR }j;HHI=@9U i&!3[TJIQh;g{ƱW1P|? w;],0bg[Xvt1J#;VbJb7jPǑ(<9<$IkTc]Zu/~igp?r_/4Aq{+ݩ=3QW\NE]%j_̬QWDb⇪+ X3|2*թDǮ$N]BuE)5`/ҰhX\ߛin۳b2Ev?LE^f@ٯY"V;GJQ&g]OѬsxeO 4Ozx_`!lNmn3P&,TPÝ?;\PQ`6ZA/FRAQUb#e FxFừ7+adUmW~dYR oљ'SV;ͺ"+3&,{a'¢`/ڃ+ i8"3yQf,VqU;H\V?~U53N+/H^rÜVɘA6ƒ E!t*Y|[bK1_iRI}^~Ka\2-wx7 M>|S¦o3(?TxkU5wL ſo ;zfelޗ7f16ݕ+K>۫/Ғ.lo6&Y1J_Beëz~fL2c7ɮ9`Ӆ?p1c68M(rneZGEdQ0A[}JHt[$ļՍ4dp+ПW\!0Ja*gbiV^|p~ Qx6ޥ\QoF_?֟N. 
OsU0ڪ~cۻYpV}9.f۫Kw8rm1ʫXƼN4&$$ _!$⋑2-.KA)lE՟gm?-[FPu)zcL™{Q*`{37-c7[}I4ĤAtz0.P4wT91*̍p:sif4׈u'+#3`z/Mfp!KҷE==>-_s4; O3Vwk=?fcP 5!uVhc*r~bR,38S1cHXG"\J aQ&J;f2µDX/5^#1 oo5d$*dQ(s#( L+|u6J~-og~85Xj>ze8aXj4s0*?-*OT{, Kc04FeH ^)IQ2F2^^u|^$Phs:RHitL*"VafXTZ$$"2QA L%(WKƽPӡ)8;a]]؟8,mrbjH+)-0E|/Swn"\漘-:@$NvkrPRJo70ZC_M:Z.>FЩiIl&GeY.`pH'ڡB`G]oK jЦ iRf>LnWAMJrh"L@.$46,]Il̙,I#Uh$~: 1Mq,՛1guLw/&]fjIzAZ)d:A,,pRk9NQT3b;`e|;F/oN:Uy9qZgu'#\sln xGADi1d d H/l>9}3Av5'G̐]M4}󢾽 !J2'cj3E`V e}5B);5&vl4:ulq)PO1wrv{H.3OʻVOnbo:4߀1T)bGjd9HyJx-cUK=0~{ߦٹFf|ZZW KU@RkЖv@ #8k/@'8+ɩ%NԾA9{Q)n-%VS;O&Mk 3K mu赃v$&Vx ڟWV`37it8b3zm6Ϊ9m}v~EOaciim}O5G Nz)A(>*L Ja꽡\7"8/LV@i6(҄Hrfbi\a;Сt];GD}иEhxN<'C[ʆ `2{bY p%֑=(+(C)%)MK܂RN(wۚfLO;9kJT4( __B@WJ +pL PVq*`Ӥ6E{AO0k)Dn(JG2(M ?N ŦZA-7;?6_>S ERRP`3`0L;XȢwQ9 ok;ъ8S[fPGicY:c+#4H91mux2=ܦ2I[{m> YKL{IkEMv)uS`,DK"ӈ Vh` M4[vўxXӞ/"qcC,5AE2ޕqd_3T!$;f2dcuJhfS|}ݤH(IV-5u^3ܠIY7)ufmu8v)?j=M -2MHȹ sATXwb'֝uSo(y~`T=*w zy(^߹Ϻ73\qJ/L 7sAHRod ^H5Ǘckܿ=#X׻A rHƓsI>HԔN0o)RD;C |X׶O]wh5Z`(3*h2:ڀmT 5mѡN\V Qqx8ug'K?=Jz#ǬV;`MᢅB2^wqE< L 42Ɩܩaɛ~H2 ٴ|-Û3YíJVouwX8jnB}[8 zQ9 ̽ кSi{XAD6Oxn\ͺChYB)|nZG=?4~tlZnoXÜس[7f*\zCČ4ӟ667cM!utwQ$^fG|#1RB"û;jK >ucm,S""pK-sJx>{n~`Rn]Avf?%<ސ@bh<Mո~^]%YtSKzƓk_ E WLzyNGzJ> m.(\qo4lnzo}Dڵ`x_12̜,|(n awaz(*ujb-htuR(}ap2_/k;s ̌sɳ.^2S)C'bZCj.NRO Y*kgl}AvmqF+4 Y[ gI iR|2_ŠV{S (a} ָkFm:O;|T3" X:2 zvg%6t}:('I)GP%܍'33&x8>j22w@;qg 03f g;qg ĝ3w@O@;qg ĝ-qg ĝ3w@_C"Pq~BiZL", IMa(9ʉo1V#h3F[@ܹy@;qg ĝDJ")@N^@;;qg ĝ3wv3qNsɟ Ԯ=WMEOM)M1X9qy)vy%p9 M|e,:uyo3&G!r#fO`R-,K m)h~CJL{ɱ$QK-9+R 3|'T:WBW`~񚸝(V) },)np0%sH[ ̉uXN,ɻ_V0#*)dFR˔fiw10"JQ%0uo;cm,S"cpK-t*y9=+G -U.^ό"_v Uxzh`l2]$1;9-W8YۍnzuZKuom~ofNe| #gnwث.3݉emg\13c*`g9t8ANf8 SdCPg藢EwmqF59Y[ gIKm}s_ŠV{S M^@[oqbgԷY%UC&́~A'J -%u^rÜ%Q [ZcI"kK9|E"g.K]Lzh.fr )sL(QmGs+:icZ)"V_"D4>߆Y\0K_ο_Ll6Bl(R~xQl K糩 jXvIwfo~ k€[&ppkm궞G7+h;3+Zs˦Cmϭ6MڦyZ,p/צ*—Qo _*hp<4NakMOII e NĪ:xLi"UgspD0:RLݹϫ ҏ硯J`XtAl;5ia9 JazCK0)xXԦ:cZ6#S룺EM,樂FVN Ec_85&bhY`t߄{sv9U#8W vx>Lь/!8|['m?Δ;_(Mz^}1{_un$zcўǬNTlަh.N 
t޸a'䭕BD`e50Zۘ:EW:ffp+k#}h[(`j@xGMDi1d>zH$.96]W4ܡ]G'dMPlŶK8MĽnAʥK}8[ر4ֱ-)+S:@ 4mZ߱M=윭K.'{I]IHonbo:4߀1T)bGjd~rVk/ԃ3[,1Vٜcf;\ﺹٹ/}u jF_sToO$)Zl!+>iRlQ,K4uV K/ -1I}ݚ- Q}D=(/"z+)0NTUxB%"V6kmڶT ,s۲ENn!ooӯku5vU9R]B(F'ȗ,RTZAPE!"f67_=P`$Iz3/pt_QraaHORDմ^Vƍ2Yׯ&gn˔D!e!x0k5f,`ZFL&Z i itiߤNǚ=D30A&)q ר]QQ8@ҒD J˽)Uc}}Lu4F/R@Dm[wweO{ USZ\S#RS]mSҍh-J0B))&)Q[M#/cL=g'vM=úә}&j7 J?{$ˀ&"hd0dOP fh/#HHLBDnd@ֲ_E.qVq"eX͎ r]҅d#Ftta92}E<FHHjbKFbb$S, A9,a#ֱ.xlM|4sZi䴓=0G%ZDmUH裫G}D#-#~ Xx\?4@_8c(ܥ{mжޜwgM/6ƞK> X!0͏)0~; #?~nj76JRغjPYzKaaYka8wWh]6ǻgߴ>Ii% >{,/msmwȳefz=ܭUQahü.Gh:l?L7ji1MKjd%MmM*T-`/ ~x; u4ށegzk?:+[߶׮j' RJ/mPqxt2@$svIom}ol,wtE5 w-5G] YU?M&&Ymj{p`0~>݆.hf0/ @K)4Rt`]+.IKΔkomJ<`k Q@"_,ۺ yu_U5bOmFy p;JZgڍYvcߓM%y 4mE?1x#^rymIȗZrV:0óGu08WGpVw*+TX}54{a-4š]|Zލd;tB䌂H-dRbJSgKdbS&N^39sKp3"اL p8%bw%̍p]S7쫼dzTfN[k1G_Ndʖtm&n|hdAĨw)Ҷ9$̄vc#n]1{ͭVM-oT~{!!`!C\lWvY4fs|-gv4:鮖ܐOrG,RX:~9\ JfhLWqa.@`cQ "% fn!~Of0qyneIu; Kӯ Mǯ-Hd`ā}M7[6`^t Ă\jB&*s2(F#$eh 63.25ͫ|H+n%9 n>DŽf4[xUϗrHtHו1{%bm՟=gӠU\%ϟEu?oNl+<^$MF&aK>9|$N (E{MWQ}~+awov'5Y/:d1j{6z7U}gK'Wdߖó'_FtEχg_<ߞ7:}ƵoFoG~;>9yr5`t6z?IɓO?v0ڛ):C^!uϝ }& n]*s̵@rEap[ ܟDnzdn"$+{7xOx|՟y 刺ڃ \; g_l1&~9 ex={= |u+%)Nmwrm`w^p}6_:)QtY6_vyRXێn1yR~*cn!*}I8@?NH[sjleAs7NJCOWY "6pBJ7uFVh;G.+u{}.v_Uڋ^mį“~,V wyjGUvj7rաQN[ՎRz/jG*9:*uŲꪨۨ})I0}@*,*PPK1:vu; QWG],O| ?u%CQWZv*Ry4Vؿص/z~ed=q\{u"ժ;=y]|Pi(-li{{$FҦg7j#pJ˰* \ZRw.Uoy?R.Uo~GR.7Kѷ]#ި.ǻvf{v݀K; Nݘpұ2vR l7R\zDEN4e.\% fT~wt.5IO ҆JRiqR i(#]]#sȵqmgi"pՒ- )J"^e;=5_D<\0A\o\7gv.`n4$gfyi N>bCHK,PyPŠcMגx$]m&u\5#E|vel#=v&6'M_FYɴבsaeä?cut3i([F@Ys8q 9,ܚhA͔cJwsRW#w!H l%Ϻ5ruvς ,0 ^^ְt٫i'NۈiJlnB;Yl[2]VÈ5ZRtt,*昞N:>k:f\xq`c~y}Et2W{{P;býRVϙedԴk6tPYk;{Oli j[3ÂM0o"NqÃ&qy^SinDa4׾Ws946Om 佡a=\``jZ;b0uO@C~e=1#ԚkMiٜvsE1!p ,f\9$Fz)qN aipDZIkx`Zed4 #,BPu\3Ny0\zB2^e c=U$F \Dwe;nE(ǷdӺK&Δ^yq0ڶIn7ܭ65YY|G{rW77= 8mh$ !6%b FK ͜RX`S4^(Âc-`RZHnm%3EHxI2B"X֪KawV¬eq}V9YD`0G sdQB̭ŁI sIYj#r' fF;*"z9)`06>2af.H(C*܈@"SRQW9,5Ɛ,Q .st+ \'h0A13I4_ ڤ80Ʉ a4&-jO,R` 4(@i85;eT8jQּvQk&Izw1:آ"7 96clՁ&v 4 &ks U3B.VYNXqK A1Q,`5V_Dh|)|ȱ _`0B,FHZL1!JbH@IS ^ [2iģ>c?t!JY:c 
qbsVo6UҼ=E\r'g|lt->$7$"*dHh R!IkǂJ(gKםo{ԽU L2S(K:+؂1q$l &V rǗI7 3m5WdZjլUð[xtQp~G۹cbHq4ҞP-bVKi ^#!0f l됥ʬj5Fag6-{~/ro8@9R'y Nv֋T )SI)g`X8y 8y Rsak9 XfHw"b(DG8d) )ъae9lE77eEg;| 2|.lNPSl-Nxf. nUR_ɓl]Eیp9oi76׮Q@,KDn$Kz໣/)6DsosIX;-jטçZZy#{־ovShyVw.șF,]IpHONK/.\<%&}TMPvC)eEOu}a):y-_ʦ`MIfJ~*.w?=hU&5)My13B7ɷ?xcZɥe4nL>ƃ)AARznur#pQqqÉFQR})tE D'VKIR3mpk&"X5D? 2Z}F[b_=E7y0„0me1#P*6 RfqfAJJi6F/\d$ mvXS@OfxO 'X J_n2z@~J4IdTfp2rcK{]_-3n1@hckmc)*>eu>Lm>ZwstCギ0enM!XJ_kJG9DZRCL *h&&a\m4a\v92]<ԇ"q8shF`G1y0~&嬤Bh$(h+Z[Ī@@@:#FK- Xݞ;sBѺwENŵ)Sjhp Bz nyXtqo2QEB&SateDyU::M\~c\ռMQRdqfITXRCKb"3#N%qfyj|SV?8۲6M:,4ټEAeZ6c9'}ŏ˫2,8_ ^zjվ]l:\V&̑d|X/ia_^s\v6]>o?QK#O _F|D]' ,Y\|k|0 CYO ?כ*-/}xk+SM$?>|Z~'-Lm^p:-wq}Z~bt'-._lps_ME}^ t}U4BSR֡CЃ56dY3[Ĕxb3.8d'gFKe'+bZwwNLLIRFK&e%s0` 2YWLeFbކajMw9RSH)kPiEL$Ӝ3%fzSȃo* 1N2~T9R{Ӊ7#1vu[ǞhұwKtB<6]4l妒Wb?7ߓ}Aq5RАwq>P;Ȍu[#Ve+^E~k{ofr)e=/2dC\&Sl>qjr~}lw?fsUSZ=ZwmKH!QP3HECxp2jeT m8%DzFJ221sF2pS_{O=f?:|}N ߑd.x~+S>i;zIXQ̟i-sKgΡӊY\i+`\ytSkK"&hRCE&hĒ-4͑nl^&7lўV$N7I@س^ ur}ʭNjC;0]}>ӌ/ǿo.@םoG2wc>-f*1Dݸ5U+^Kًc6B*i-ʠR0؇w1||%}xAR+m=VB`VyMEr3FL*.3''͵+ߋY?4Zׁu#K՗Qc~uľ˸.дW74~66.F X[YV`2!EM4%#\2ĔYWLN =nŞ#^ YXQF g 9l`L)E6j\0RQEl|/~ml7dcrƴ!ԬHrW2RH!Z:knd1 2NQH}Xhi6h { fWV㮥q1k[$ >,yɲ*\c+g/EYW!uO9\2am}_o/\.xx!q]6;]xR%B"O ԏYkI[ww樉yn2߹|wT5.V&mKȫX@7i I~Rꔶk0Q69U\l:*}6^e8_KSM7A"\Tu0nOm?%9紷kD7ջ>t_QzҦ Ok(J9{zNjRm{iHҾҀ*C``*WqJ*c )ScwX)X1W9_8:R4Ej>UMY4$YjQDkX'ȫ1X}zv쫗~w>);H0l` :\g<xnه6_z`:(2m^6A h΄o٩42Loڻ:d}dt*RǔO51Y8#2 _X:(b3O3{</[uk?|ҳKOu|;G ye Y; vլt-#1Vn!7"=?RuUMpP?Zi T+)ל@RL5G9"sMP5Ez(KH?+wӤ!ĝ>k2+e:D2D)æK[,2Yrk%F4r~qo`B{+TOٹUy21c_w.:ÏLl/̉ ԭVXJ7VU\ժJkoR:3nz[jWWr0pU9pupi-WWUJI#\C\KW8 \Uq \Ui;\U)w<Ww$ V \A\%PJndWoCO -_?|N2\=H\^9|=W҈+#\=v0Dv@pUz0pUuj(pisWUJnFzp%$$WU`gWWp=Ғgh] $C lj2hFrpY%x'SA tZ+PE_5!l.TL)̜J LJJ3'+&ZuO]OX YT}#CFG"E+1xϴI "ܣL{=/eSGy2Ph` ڮiL $$I$h^ `46ČU)S)֔f" fsam=4@l =#1Gs.g>MfXJ%jn.@P \Y⧱xbL$pWܰ i|8peyh"eEk\qO6ڑS] 6-4Y:Z{Ted<5HM!4)4jfcNϠ[Cx2#/]r7tJ$}R;9! 
]nnƹ%ŹFʍvJrx.-F*PH B*a I \&5`5͎z kbj/ +c< H8,Mw1ot7K-ܡY#a™2B!E9FR^iQViz7蟍A-P/!I)cL19e6i"E-"\/9FG=Ǒ+ (yo&>@d 2Ɋ$AI5f砲ҕ"MM;At1p kL^6Hv0DdHAJ:ԚEi+߅č#Ks|d]~޻Q,' <qM"-nWz_P}EwWtsZDWU"h ]mn (S3]aJ|bӋ{J[-Jp.]V]rA`o]J0]"n]L/l߈~P&)+!Lp7E[ЪxJ,X]E2ʲEtEov"j ]4kM=t l ̋443J:G )'YI|xOw`GCly+Ԥ xJg 6IuΣi92yB &w'#b|z˳a wM ?'AI%v@Ÿ9\Uwu2(GZ ɠ>hwջ"&42Σن!e1e}_+0sMP@$}zU5$Ca`_Lc; A0, ٪E. OkƭӇyM4Gx%_oJ6-${ "yҖ5RP[PJkLelgyk p!BWVDNWRλHWc[DW8i"ik 3sss" t%?H$^VE-jp]Z mrKt VЕ\$պM$߂}W܈bF[ e3]A:i'7jpӨ-tEhd (Ӆ@CW2[DW,h ]\m=b%*buuJAlTj$F-tEhutE(m_u/BWiA"+)[CWW&$'2LWw4E""n ]ܬ5t"^]ʘ.UiɸۣvQ]Qc&x;{U05໠-qw]sЯFWCo߆(.v&7Dۺ,jD)eU8zka4hv(l;tTi69X/ݘVޛYǘ7о~5w"Qi^Fɓ07 iS:+JoԲfܲ7h5z伕uUMXQRhQX%g7jla[eɼsZՋPȅyUhLsbqXJRfan2 &gTp5ƴG(4=?%U!rfa=zxckzݩL~CP;FYnC",L!¨"Qr:t6%~ޒ-wC}ҫ, >\m/nQs_xGݻe. מ\Rnқ^M}We=/t|ww~8Q'O['onz2WXg+ztvr' J$ۼ*H|./Ah۞Hj )27EEZy^8)U VVD6.g-EQH4O 9dޟ& M7.5g2[vzŃq}?l~aLդRWY$a,pzQ5?O} mm^waeiCc}mbM$LEhΊQH.^ɣN[Noseޔs|Ym-T- ~4ad^zvÑ"].68$!5mQSo_ɼvGzr5ԋ eot{M+Z(:xD;MQwAe`]g/5IP;+7*t 2+4M!L@\f*MMĥ֊Nj()ԙiQ,MdV&3Yiur,¥y\GU?{ȱ\ O #!H5FO-R$͡dkTσ")r===uNUWWzH({oO"'˞ײѣs.i<"Xd1`i"9Jo=RIQ2F̹tRrMტ=dKly|@4:&́g03,* nVc LT: '1.ma?k`=D“脔DD데`d^`9nq\xc$hpkx$,o"i#KkCUJň)"4Ia !; FTH/Bz,fXbDžB鐡#K⃖dUaؑkFp_Ka0iJj㑻)mO FeĢ_OS, {)DbCrVgjϣm.t<~*R ַZ#ULk87:iF[}^swGףazxE U+\UqՂl۬= *G{;:E$ t (uXfTK 0{w QR@ܸW0ѯU/-LRU_շ8(#XjF>r5#})Gx_)hSL0t9ֻd=}Dis7W^ʠtCqے$5bXO w`)"stq- SN/%қ0Es %; j! 
n)k1ҁaToKqHA|Uk WG_ղuʶIA.F*EHJ5%EaRikEaJȝ;תjBWbU)*SV9b3ڀu3uTE\ؚ .Kuh~d'ƌL$ûZn여rg.҇mE:2,R?EdLp*6~&fH;ST /g-#{R ']p;U{[NыX.0L뉬g_-,ϖЂrz9d6bN7FR!,kFɐ}cϖ,;; 7rJQԡoOoY`G5v VYI \&/ (df ,E 5a&"4Wq۬01vmHϦ(%DwrǷfw:vGJΰ-b9iLJ|$`7!f&<EhdT}U|N8)=d_=vCٶ/wAkR4mMk//ZKt7׀h鬖WV@+5FI@z*v ܟ:]X _qG˲u9~kdom]ilB*9 5Cb_Œ&S6*vb˒RdߤLuPbinm6%DLb* $@HZ )D -(Q27R[26˿>?e4 125)$ȯE:;X渽iI^.Dծ|V^01B&>7ժOk?ƫRQ??FY&Xrwχ9l_eqy"3G b=l* Shϧbxw9rSiP*́`.uoemsEn`E؟+՟+;s/+4:O+uxِ&O¤JgpC\4n{vy5H1+szݑ(oǕ"@J ~SC+$ ӈL"^SK>e:p%{p&Tߨwc2HO(&GJFΉm<)19&fv{4inR''nRK 2u߷Z.ϱ:繟fӏ'"O?vRv'eH$"_8E*Dm A:蓐f %G#,&1VŧV89S R TdSFSտA}샱:>N/EA韖]g桿xu:~ގQ-K!2kNPx{Ñ"Rg~Ѽycn̒ _WL΁c^g#H *AKZ䧶4-}AO|3A~Fbl(r~8: >Xz_aNq*p~TrS09GK atx9q[ i:{Q -scal4o^JvHiϜ]),sur2Kև5db)>}uus4yu-`qVd"°nᲓ n!-ѷ,7E+~_Uū9یooӷ7} ԞeGM$c,wqyz5-G yYg'/_vt3>R Z^EܽywaN-P2㦹|4YgͲPs'6¤D׮3p4kSV٧]ƽك{g+|#{g2"o8kAA;5] :\{8BĈN+aT mxR5"P,/u*Ae+9e]O1y#Z b]- mRZ8UM&VSiFR6)辏G"CgG+Έ2Ôk5R\֡ iaJaS֕މ'usڎ* %j +кj, q|֤iʜR lJ#;6f`X-&BȣJYEl"I05YZduĨcڎnMXnͭw.g-B9yA:H^ u1k 3: FTG}NUl?xT`-v%Kqp{vTVm(6ҝeWˤsT't O"s5!b@È2ڀ&r>[[nG|_LA7c03|E$JIϻ;k]edll 3x8u!uxIW;8)ŔjQ2}3H>68K#Jƍ8f~]55kc;Z' ;D{ 902[ }P1d&9MXY4[Bb[=T4[۫r{b[ly@'cF*24^)Jd.zWB\e$TP ^FB$f bDb|/#%&u6>c@2(B;hJ.uJːpVJs$_%; X@xuz\w 0ͻq0n9?w$z# zJJD[IޯA {Dpfers#̍K1o:󷰿ðL]g2丙 W(ef |לϢBM&S*ܒ.E].IZ,k!zQ?C0`;pČ8jUL8ZiL'OD汭KmD?JR#e%wa[(+ {EO ֯Mw?`hv*+xuЕ͛Nnv>n<WXlb܎}q'[I/]qfZ{\%Pʙz#rOvvֹ1r;Wy@SyG z;"^[DI{Ko`==@͚gE<9aDNFO7M$gJ=ݭI9hқuטgom:Ŷ!^RW5J&5crxCvlUMEJp?Y2D/ )ሒ؞1 Ag'mxNS0R1fd@3|by {^ZAUa\`5L>'Snm5Fzvy;URxv@_oRR7ɦcRhx(DžKzi|#ߖ'4ҽKG6Hy\m陉vQuiq WIéZ mYI/- ǮtI1Ӛ\a dRN6s_fvZ1=6iÒiOLat2L@JsqQT;XHlŨ ~;6\sm+%|κ/T¾g<Φި6`e'P!j2/K*n-+ ɊǬudu۪#@pd;-+[ˊAM|nzHxG#xq/I0 DeNO;Q=D!N%?BY:R l0W,1G3){ƶ&A.~҈R4We-Eo1UI5xPxځcX+**}RjfF4f> ǘKkZ_:gH>B+EO{AL>z,tAJ3PwX4*8K@bGjFtaVAwg6v lkPs MT:x*i(t~jTt{ܯKS34L1xyW 2ڏŞ H[VV;O#|K2䄞PQrvȁs,a5 L␦Yf+9%2$(!yԢL)2B)D>u4rpm,{Ƙ(E7Xf!Wiƿ@%OC ]ў1squxu_>[^oϾ/ do߽~ݫM7]nUܙ~tܺG@BӧF?+x}Dۓ-EfPYX*,86 /:L^4xG C1naV 5YPDݽ )KjU5q+ If{eF=F ͦVϮhm? 
*6seLg7TE5hH PmwdgL;gݛ׋~wsy<ćd83 *i8J%o# pRKolbJC _VtHv.BJF}vlƃȜsavelJjCGWLy;R]%9G}sN^Zw>}dPqa;4>} ~G}#rV rg=Pϯ/6"fO9B1X-4K\n.[bRG:( 8OEJ($`ZbveߵCuf46#e=d;þ*H&}IZnNq*y*ઉԍУKgsH%jp:G-kنs]3>(l ޯ.~3t60d:I)6"{;ǣ⍁ w%A\!=4 |"ŶT`$˻c;;-W '8(BVeU6}kk$u["ѕ#fy"GD-hپo+v|J-A;0IeA 6}yav_됝2sR!邙~Vfmnv-7hC;Ƙ[vO,-lw6YL'D-T@U\@Vh1Cp>q "7BV3PN|V פURzF[O2}{RWf:o^u`h ?TO5]4;zP͚g?#`ɒ_'I޿&[VdEŞRdݶ>Fcb޼uNj!5Gc:Vw,V@4A㓪u䋋_~vh՟ScY|"{ig3 }fhSUVK `*B"y-@:H ;I`Θf>:[ [o E7hҚg͚gBe]L^&o$pMn6~O;$%o}X|tn݂)%5Pz]K5.}ZQ5Giy`!í4zQ0CZ2ms(.QZrƝQjƸ|i.^ӷ9 @v͂gCٳhZR@ևV TtExϟ#NzpN4zݬy6&#_@4O֠I.\'[8…Gs?0T kSCp{l񥔇&ctT}u:NFVR4l4V׵\{O)vp2.Yd_8<1EO̗dvp#;I{JvXWljMp#1QojW1nd ^QlY%4Zn\-1m+..9E.s.|fdKچؚ.d@9rv@HH!ZӨ9MlebDfR^"UuKDZSH Na'FhRՂ!' urq{%Ϯ"wm=~[}wPmn7>@[~cxGoM#kwvbjþwhHsS!e]̓"7(|\E} Gq+ons_J|R\VYTaV`lN6"ҒtC)UŔ"T5Ȏ dQy473zBQ>q&g=EuiQ녅)(#dB:םBL¢M!D1~5}2S4X,H1"ب[Tu m Qv:~+I(l@ɫ'*%8OmLɀ@Trec29*.wqL*I/ -ugɑ>%rK4[ÝWfOEEtI96Fe 2P$-[T.QyaH|]Y(7n<ϲP2[w|ޒ#}"ҐS↠\)"<]1b~oi0Y].cFpG L ]L 0V-q~\krL01 dkοk5^:տ6ì ;_}@kmMue7~_魫; үW:E(&/0^ˬ]ewI⵺"I1!{ y%ޘ?,fZ ߮WxY7p\<'C#?fH^& pI0?H..xVa^ _uޏV0vs9߀ogOz`~I鑓7&V8}ʻ.Tʜɫ"{Z7>e4W%m R1Tw2qq1tE2,['S8Ql@~#Ű6^Q2~R"2_cCAkP1Yj0ZڢjIhmJf'[ɖ. 
ѵڜ/[+ea]"2E˵Xvl(\Bk6Bп_Us)q)T2i\TdgeN6Qk}mɚ@hV\e6h .['3:_5ĭKdT?\:ШcxձQ~۳ Tk7 XT2`j/se' 4C=[qJKj+P߉-@dDsD)9,jDM}{_:(e X&>"gT55IBӶTٸ ,5 5}֗T:l{8KOx)QPb LmNN~HU t҃H 5 S7$8_rHH4k:lKDb,ck)&$q F@>t42uKT[c,qM$-4tDe1{ۮBaBKAbS(|6*or"Ʀc6B'Nd.wX^1Y-MŢGOΙ Eapg=MKǮh?L%*?~*0kgkL.e' W} l-\B !/`BYRL*%о&dt8jȗo /\1[JM PY_% vbxYޝ#૷Y&M2Ŧֺ Ca(DSMt2Fl 3Yȣ* Ͳs4XQ;Ֆ!ٞCFdg܈?2湊Z(H azR#CXB&m{"`H/v(8(appgjY jW{o4UVێEzh` o$^6z25#xdLܔ 6*%B^ > N4JeOJσ)0lo]6z:DID;%$;+WETL/xsp^}l򅛇ԁmؤgm.bh!IGjZgˇCK٣7;< YĈ6Ϻm| -<τ~,rW2Dgy%7_zv!!iIW}YS.{Uzi/翾sJ ~~O//ξ-Ki(J.^/rB.s'W7pgEG@kg3@lLuA;sYfNptPMsAxb6n3o}Iß4j6Ʀ *o5mDF$?Yr+ֲ6u1gY{6+\{H⶛Npf(-UE!%ȘIIX-,Q999t;4%M<.=` ` 8E% k394H| e Sy5p"Rո-"Lɢ5lj1Ƭ᣻-\(sX:}^j%~䠻=%2J1SrS6:)7},G.)kdi[:a,4S,tC# cdSCK]58L*Ļ*vU;Xb!xpgC ʻ 17oy)yLCߟ{ғ>=jE(a0G:V$uj>$kPSQ(y%rb&%&rGra6R2D6Yv#aU;DWD@q )B=0x`JxH ÑeD+#XgKA?h{JPe9NJ{ *Ddm$'a+rjǼ5GDXİ!aν1!N$]ʜb9%K;K,H+l26,Q#ceEXS"q"42Hs %\G0M3}B?f޹M;bwQ ,K߽y/+AM{Ob0؁0_"M#OU/F'r1Lhxx{5F|h+  . xPpx1,Y.%pcI,i`R rDOTdpXD"K@b$[bX"fb{&cpc ͩOp`!8T`MyG{3wS;Ff8$Y䦑V}EX%1/ X52=-W 73+3MK L<22T B *׉IlLJGkJ6A&.G#JRrMQa Jwʘ%EIu2vFPEdE覅7K4 SRT(]~de%{N!_ .JE҄!څ$t)хW3 d3Ea ]d ,o!.urm:zqN]'mp&, uzs%՛%g}@Mg 7<秃p}/Or?b&A|:7mNa)֓ Іsc db :DK*>r`i!$EXJEY9X1׊IF;< L4E Υi'b$2O`ƺkәv}YřN $~ʌţ"poz!<3S3P֐Qf;`b:@ls G]#=f ܖmR$c]vg7aɷ4̌U~}J>y kV)JޱaUɰjo$ PPNƪ:ʫTE%X9W{&ʩP<˳|RGO]q5^,z|Q-U ~ZXx}C^Nۓљ~T4ܺZZC<, eOawU حƵ!쓭w˲Z̲]IV3e!Ju-z QLPAmztEaBn[oI>Re?+eeCYnz w(6$=f{;gsW 4p>KK jd46*zNjutp)H;Rc Z{plm{!!URqmRQ th:lC?TmiHU)VZyh4n`M\Sx"Vh./un_W;EɎƱԍW*H~mT3#fZ@iYTi 'f}4eufMѮOճY'&O=d4A셫׉ew Bodm<*\"A}R.ZC[<5YUh8rv"r.;& >Cl(!be搰|rB(&8`I0+r,UTh%tI6C~g?r0٪ ՘ ΐx$$:Ap{Y?+o\.~ӧZc2@AeBgav|{9oKG}33 >Yְ:TjL^^eO7,~j5a) Muȓ+y $ 4ӆrV@%iR |ǚ0$WhDQOD0p$E>2 sQDod/"FqG6BM"w}s5\ͨwo{zu~50EIA]^~[YۢٿHkUe+/N1J7!>>鿺g~6aqt)8hK55\fdB[~rvj XfeoRI+|h2Ԯ̼ /83zlZXi!LB=f5[* a-1z5ENfiCsehUwrbIK=|`SÀs샇ˡ|2e\@O42 *Ka"8B &DJ,#Zc g DP F$O:4.ׇ&'&_p ĩRĆN%I׀>FuiJ՘bF K2-$3yzhVصȨOԱ)@qf1q -21O<(F}l)q(3ZL 76 XbZۻsgZXl*Ⱳ")v\8 ͥ~ LAes*XVOS%yJ?ZGIOP1U"{RzGeP3,nt6҂z H3rkysFVt~8t_iL?+R+K$J$䘪q09`|MRȐWt,z9 !`rr LcUH|v{|TCoZKC7xJ? 
櫰ip,*)MTˇ]pزHD*ͲKZ[]FU\;D#&<eܳ_$ev#gZh6o ۀ}86ۀߤŏo a_99D.$Xey/Gr]477Ks6'LJa垁IbEq̬N S(ap$X+zz5f PYreLW;~>n9 |:W,gb|,\wQ,.0g_ٕY;H/ r 6Lg)I'y~˸v{w0*H 9[(o?M7LL"s;EњI-"x?O໹5zk'~|;;w'!4=\Bk?;(⑻;w!77. l? hf:R?;XFSck/dh g{ !n,9 {<!IsoFߧeY(,8`]EkHc礓|C?)" 7垭qFJTIԂ V;݆E[GѫKۚ؅761;9j{Jn tM{4W֕HTA.Uy\;º^n` קpH&3 A1"9,nHD&r?W@E(s*_S1%((C<ތ&(ZbIu&}7:`K;a=B(&\GL'2#eO8R_5Pa5] ӷ)B.h>E?_mpPP%x\5MTWHe{քa#jya$z?gyW?B+Ky;- 8şQ|;P/"?Z^2k]{|;"%qbcBLBlJ ՚X戦ĆsJKNEF_i Jg^Y}~TmuaZс9\*ҸH@ v8Bh@>XZ@(ݙ;FbK)4_=^(A_'6" Ɇܰ'9O8WNZHPYd +ؕ^.2㲒9?v͵j{W݀eoG^>4փC2-v G*2'Ѭf>3Pa\Gr?uRxy1ƕhLeT5&N9N sDAHƎZi˕{?_m-g\'aݡKqZF'TD# IJdoѦvxw3y@5}d_뎌Nb- eĈG6#M;^tSNjnxM/gzÊڨ%4.JJF(UNcI5QѳoM`CsGHK~< gnrLwcݺcNΪM1X`у'сP \(Na%i™2dzӉzg:գːgR3WrP,Y)Yd:528 0bKB!V1\ 0,dZ:&D[S3WYD,OT귛~귛~:y+0hĒII}H7 YdM)-HX~xDȻ[`zÍ}mBO֪g=l}Fh KfO.i63c׹c~o456*Q3P%`Pm}㇫A2`ם c<0Ơ?@/+:x^+:$> qPcC0Y"mnl'\7Ows6oasV\:2p72]Dx};C̾]ە .g+R֒.3T/]vԑsͼJS`g<¾1&T;t 0z~тe֗}od,Pka@w{PF6V-̬=4؂fdgг%Jʸq ҥbx'VW^j9-l 8=-YYRy#eŜc? e~ nV%i'c Noț8FwC9h|)tp{u:W10Faa"K>hzQw=qNFK&ҌY+*q؃~9֮T[,%uV2N1f^>@p!97kc"x{sc|RRˁl"/s.FD0yEDCɔ, A2쉊%Q)Kf{2v#//}|F`6")4nCJ賄> ||kԅHӌ49n/Xh`V7<$b%RRax7aRV \96+Xn6 F-[: r':X?2Sx ҄2xevuM[Ա!t߅ Lùvs;F{I `/#m[d5e(!bCУ%s-gεv;@gp}٘y4gؚߞUN[=v:{jO%sϗ4aᚇmjOC fgΔy 0}q+#4)Kwn1 /KK@.0Ź IbM!/΍^ ޏW] Cy *ܴl$L L@6Um{z2<<PĴOUC$jP\20.I ,hђ}eof0Q_>4}t<[YбfT8qGAKF|lVi >lɈjoAF+08 D[Etѧ|Ŕ҃"e)N!G!V܊Z1WBi9l ox<YR#b2iF[c$bXnfnyоУQ'[gsf>3෎B&Q#AR<lƖyn-"-TM†4?Ğpr֛7Ō:}zZX€9{5S\c;eBٱ%!~@gu;\fn9n=2m8VRW4EaZ:YL7tSL7tq0`iKK6(9leTiʥXp:-$KӉh:.WT[nfyfҫi \=)}?Bf#~cZAPFR d8]e=/MjG}?'%Fլ% |Aԕ_"gNʉ @n?~"TG `R- ζ,/+' u#|*~G JH+L,_OB.35 46!_fӯmNe˙7₊u&Æ>hq: vHMhiMv`߂&F;}pƆ"8T)bYM&Ik_Ys.=zĦ] f]=y `T. 
xDr$*W$73*, 0"`矈"oTG8y׿ܟ}٨Aar^} ݏjEKNXk+3 l|+b| vT*bF16#1ymIm|)tz,x.-J8MK~ÅJ8%DVG1RZ&%kKit{/K`]`~6& p?~ujO ,9B4kŦ9]W01PZQu8c0 Ն )$1=dnzԈ$fP9=gNA8c9 j樵#)0C'>(p` S'g M)% dp༜{|d`Cj홲] .r ؙEfĄuVZVG=Cå!s7s#dfyy0ѫ_NIB95%s2+?UHRC,E#&̧資U9 Zs7݁/x*&`*iՌ,y@X*֠iD'!Q̤)+Ib{G$*%"mߝK\:;<,.'DZXpX$(j]gH8x-&n ؎ Y^Al Aa{lJ@^ co\4O<6!U̓l$Ym)zhiSzYgcΌ8ޗcl ͏'I%@_m%!4puZtO׼D4Ɔrsܐ*_vq$lb+V V8-񝠯5)jfXG98\iٲ,39p /}~eI02[+,}Ⱦi(e;]>^URa5E2JC1%9K'q!fF7;6 ?7KyyW=մ72 1JM)+W,6C8 1+` i*2 IZiƬB۾+}ycVJfq۰kr\N;.'ZqF1Eg"C+1u̡Zڡ:rTȡD* h9O`d,>rȰv9 78 *\[!^f&SgV&<7-GApSǎg|StaG"`1qJ6 Ɋ`|EK)V LB ;EB%zFcnɢ޹L:)GʜӈAȦj je$0E8p7(tgT:6) }V\P;,b@~XlaJ$V+ۙ(R#LX-Jd C Qx] +/%U+%UY:))Z CA LT6O5;#;0 `N38DŽ#Y,;ݑZ6:#m]>0,ԈC曒6R`q!RsHN(wo࠭ HQG@ۡHFud$5?ҟ_ȫ"HQ1Ss3v|7^TN]e 3=Il={効kcĤk"uZ$e8|U`nU$snr+omp3(yx|{_O]VjS#-~U~xۯ^gOni|/Co3-(Дkfn$f;'o#[ٲc=spiBeHݭq9WtKQXne~r|`a}O+7ߣM8@xmQ i)^I;0.YoEG^BuGa$n/e}->kLT$v'8WT~ӣR@RI2b*.@h'̧N9$ˎ̢ ƈQzGo¦5ߚ(Q,O0%"m8Ev04q<U0ÙTM-6 Lo,P&|%>6\ȫB= _vtmGΜ ˕mkTh"<"JgS"NK1kLs#xFqfư ]щbňX!Gܶ-fvDHT@@ԍ S[| 8g'O^c_wׂWyh'Vsnu]-,Vq;4Ʒ9/MwXun?pd%~M۝Eh0q܉,$ݡ źGWT;Е[W>-%񈄀xXFA̎FA̝׼x oxaHrh+nOBF5~˄1 hκ/< )Jj ZIw+k%ɐYC7Jjʌc6HUj#KVӑ>&7yazXVΉ"ztmG+ ƈ(":YŠsztmG }(8=%îITG;i  ]hp0˓;kqL/!x!iuॣi2CHs5c+vu,˲  Q=̙2%Y> ,;e(IWnk%b7򼒡01XH(5wAek!;%D[ZOz@AX둧O^}`X=7DQr@ގ$:h%M3J fJ<}Ҁrkc}HJURk}3d h=b,E8rn1*I}4P޴MQNOBd#>M/GfiPtO|2$ ыdyN*W5ټUeM: ozn;,ҫ6U!.<dLH]r[&8ѼU轞Ň)X%1QfeOgX^3w,u>4+'u;ϡ'C'?ѽ5Ӻuf4&05CiD&;e%AX5߶z}^`a<8>bN,[0b"c Τ&niVQv'aGrF6i@wft!ť/o"|IcAu$ͭ%1VULF^0*MR`8t94󆦚YW/H~8R*L^alepŀvl"LW6Nڦ kLg8L[L_DbW RoRᐲkd]FȄ 쑖 ;X)LRRy )|l?x &SD"wĎD ne!Jj`;bر8m5!NX+c v<;HhLkɐ|&Ԝy% }HUE|N9d#x> 9.n_ܼGH?;a>:dгC3ͳio"eՇ!p<ĦR*|5F\t ka,4 KY$h]-*cƃQ3}jSW&!Wkkb]1 5JMFY{-n(e X(Y9߀[nX7;4{k)v2:6 D$6aD8vl.ϭ9tDa o 5&"# '}*oÆe =ðܕf0iV[{{;83v6]52p| ifs˰le`')>Yf+>s§1|c]C7^ь P( E0)AT҄hfH['gp|I_sP)Ίu8z"`YIv$J#r*g+U^ЎZ,txpG&FБ%مJZ*! ,B|Jq€n8 LBijS+I`iQ^9-5_7,fv|9.7,7zAjqClYj89'Euyj7g[p+5,eeH!s ^ki 8%\(豎Q A7,<g_`WϋXbOu^ ~bGx=$ hyt_JR<v :w1}t"Dр?}燳| 8YpRChP[gdϋR!L*W 3Bw.+ч yIE3Hz&+%8r }"7(Ndܑ[DPYO=[:_T'*@aAd5 V3!KGTلiLAP$;l(˻mh6L&3 4:[B**3 ;e2-)i)G . 
vE1#|X+HJ+{lk"4XL%\P]M%,P)U]Cك]fA`Rl SL+EsZU7S<N45 6 ŀZۜ$ػdeggp "ᢿvNjZ/eql4 uGDkV70Olccw)xX5c(E&}XCIEjGli}ͥ@}[$]v }d#(J~c,XV"+)(D"`U9z$=AWsB$Ns_ZB'*)o ҹås=9ǥ/r.Ke%q|3\~\vOr2KFC/,ᖴ"^偐5 '#ADTظ+q ZDb;p/DtwLx^8RYGu++?kWƵX~z\(}>׿=@ׯ?Exzq~vZ*Bw_ ҭ> ї7ܝm^ 5XSeZ\Z:LByg+0\fʭlt܏k8H @]ğ~ݫrSWtxQ2?ۮwYxT~;]+܂Օm}2R|{fqA3β]׿%Bx}9 /ǣO >g8/f@[>1S.eCSϟQ @wЏqYNWg2~5?X". ʺxi\YQO%wOe6 M7PJOyO0hէ/ʾ+dzɬz9Pd0,y'P ?)DTH)A?($7z3y>yjzO]`̗߿BZ>K\+FtBءL/8Y MBM9NB 10RcQ$jŌޝN 0/>ZOlb&t|8fy1ReQdjഗ<][s+,Lfk!~qC;$#ZD%ɗWK$yBn${I$wJ } >)CІC@@RQ➜(⥶(%wn,_ְ!0b`c_zJR=.NU/:u/B*5A]?AS\ )Qd<$%G<$fiW ٫HD&)I ,lsS979!ÁK#KTμKR l `D--MQ qߥf>ξK5!&| ?GAOQ >U`Vqɯel[YBSK۱:x~j,#`$M})X4Fg_IE.™E@yVPͽb\>3hW6g Pcq@>+8Vr1#)J֑ Xy}^珘,YGi/=aY_!!e{LFm玟R³lOo4r9`U# RLbssGCyPJge JDUdIԩ)T#$KnŸ'CT+erWA(-VK䥶Jꌵ BpNnfjW5gi@RDV.ʥ1/p_`J9^JyGTД@Y]OFuࣸ8_Ȣ1NH~E*IRw_L.Q]c){E0B}F{e_דSsJ?=d>!ސphe`=h} [fp9[ᾜz.s0џ!>YBK6Ӏ)nD->Z\Z㻡{$%pKUo+j cJ5upӻAAMa3l6Vy l U^,pp~YԚ8 F)5T$+΋z~"9.$A'L-D@ƚznU[ (NuAR Ѷw Vl2Kƃ0I&\;t(nKp>SaD)o'{- !S-mbTfqWNDt8j|DkLIvn"Rj/?4+vx@ c+n?KĨb%Y<]nia_VQ,x*F#y f^}#= P,e ܺt`Ei/jڲ`\?&|5ҋ&TptFxmަqw Ƽؤ ^՞Xs-6_}11 K)ip njH8]9W=K 1rJJ0 8T3c5AQ=Rj)MB 1_ h< *kZ:~cG B!P8P:*: `۱3)\.[3ЁGz䂪89.mIsIp/Q_іn8!ki֡Rsv^XCvk],B-2.bƽWLQ\`<W|VxpkA{ by/7\-^[/琒NY: )MAbE5լpT"$psΎ LGK Fvt伣t=^>S$9i@aoƑ;!o?i2vvq#p?5)w2I)U!rY> 2z)Cv΀oYHwEχ})yF̏Fމs>{A`wRm{s$΢B} Tw) );t4:HI\CT )jL^K-grp@˙ŧ )xIZXk&8~>RR۹|?AZlZ]wD"@mT s6䘌?${t6e6XݪI_g(O1*DwB3a6ј`,DJb@Oj SgY(QyтLߙUis^Z .,[Rl}j l x%}j^E)RPvvħaqHܜ8k~]-OsTv/.QŇ5)k8E xȒ0Q&i>]q ĉU֦+,Ž տzf %pg"kҗI ".Q?6w~/@Q2-rp&oFynjz; !cqI4Ƙ6Qe"<yJR n𯴚LfB pVz fs (=L5ͬs0{Kvj~fBf"Aqڣ2 씹FJiG5>(Se ^qAF iiAb+%K(EGTkIq2O%#n0nf8^0/˻=O9VGyɑuOsYeG6I <{w;L!;w$s1$˺vq,b 8xDr cxE-1?σk4%BX3[tA skYmDE.YP;#,  PZ̹@(!x+)(ĐhSl5EER,C[XIJy3 }o |+Z'1  @u<*0\Gu/ƋRJ(୕FR#SH-Q'c͔چET8Q=Sȡă}o$bBv`l<aNi,$T֣$b+=+OΠo//v1OKBhnWh[^ [H&lM ٚ@֔;x@tJeU:L!xpU MѤq 8Wֵzգk|[@*XMS1k;n;&'5g@\E"O(J@MTL<6NXT85 bD@4v 4!c`B?wƍʯhtsNe+J /%' )"92%q$FׯICQr SFJ؊bⰶ4P 9=xўTߌBEMsŏ˛'4C]/vtb}^LsS}oA]t֫@b+ qH8AvICJ̞ Y+~3h489)k1yE!$nجJ*8W>֤I0?kݼغ-V.ڶwT@(g|ۻ|Wb] 
yu%VQ,W#іٛAeyHu2`{Ȣ\^9h'R_0of&򩶭6PLfS"c)y7ǂ:61J۰lzRO_ŏK|{WگUC{=bKmX|˫F>m/eTUpso^( WJ\;o;Xywߕ6#k[Kk#w*޾1j%n &m;?ޡ!,!}@e5]B:lt'2'q׼aH?* e5V UOBUX/g;p%Q1x^.OoKw [Żυ'st*V0S8zh'굓O[;1߲oi^/8q_Fg1OZ|%E@J#E*>x(ćh3!1 %GePwO|1J<. HT7O|<|xEk_/*hE3թ?BJ>|dc(!WGK2v\+KBJ91^9rߒI샊{5 b/ޑQ0XL<*6R`W5nw;КA {5lSUOo~e헃leݣȂwl-`Ï }\0 {|~|hZDB?@!1Bf,[?;7QBM]!(bLLK}!+&9%ERMos0!`0RCDsXRFtmFBmB 8LM AYr UC'{㹅68GdvWЯAoLQ._/_w?.~:ͻw w}EDdrRڥƛ>Y5M*z j9q6>C{8 4 8vxD& Jy]FWK[/?v^k#V%Z޶ᄰGe+BɆWJ5tPOeWl5k&a &ŃآZ3B) ay;˄brY2%m ` B7mlDlُA$ H l3,XgĚRJ4 ;{bͳh:3N)(MT`Z^Rl;Ȅ*m>[6ѱ:ur\IZ\w"gߋJ r'~~&WbL(v\]Iun@+7I0ɹL±`DTJrr@I'/hY.~EW94b]` {JS Y@,%iWin1Ľ[(PE_iD=;ICJ` Sv%^.1C:%y &ut6t8^^D@̑)kQ_} :; h^>W[e#⃺.ɔt+ky'Y-H5 RPr~J۟sb:g@l8ib ꢇiŏSo4/)=`aаYQTP0!(̀.6eSh0..^&D čU## :;:.Fȱ߭u1e\>x,7Ae;hiW֫z=%Ei>ű{wt R0t?۔Jl]v3$^_>L|7wHFW#d pid勗N)7ӓ]集޳XΉcPfU> d49(xct&$QF4dU%XYPv)i =qoah49}L02iXbNh:Hw -)Co@ `0'L6XFH{ΏXpV#ٿ" R>9KKУYI]=I#]K@emX32ނxtx'+}73zAx%s}z8abxx_}5 zCt2s_5%ya3G<?o ǿ^_ߖĞՖ;Q9߰Kp",d ~\8nHf( $8b.Uc>)\ʽ7S,.JCPF"! *6𼡃\Cr!)%YX 0f!4Fz$S<\# ܌pMR>gMf9r RY1 i8 ThW: ehvlw{ ԥ˖gͨ 8 Tw%H=7pyma1 =ox*C֌k :G UgI{>D*bbԖD >Z<!i 8CH±wZ;{U;)v&Ǔd Xwt陑4?P-]ct#qP؇`XNgcxo?xolll.g:>a3`ʸQ'u1~ şh-(aAkKO/Ǘ1IÎZ40- XDZ^d̕o6FtTEs2D˚gcN@ _p;uaϻ/I^?~ިΒ?wt{n_><Ž8m;m:tҖ6`'ѽ' z~Y+rlZޥ?n(>yZ= >Q{ZYscҪza`A0}nD&*UkUSef'9*@Zf:ہrх)g SV͎©-8XrWYɫQo*REw0aVrb2 {">^!Fv` BW/1<_`ڋ[ŁFO\?I+Ь}?߭vV z dY! s],##DZ^? J\5$+-][)vv`E!!(VA][(PIzS9kXy>M*|tYP9yQ 6FA(+rr,etora_*rեse8yXZ]̇? ? 
в>JJ;B=!F![,]IF>.Xp`&Nz*6>AzKT1x4=LjmYЮA h7x hڛ˘+ݰ``{ob@{]_z+s<9ǒM/Xeh?o|;ŵ J*R77}߽9aэJozZ(h4 "C]_@p2t+%Wy/{= s_3]kmp$,5\ܱ̟ /?,&7Ss5 (`Q{|l[Rd1X2UT)+zȐז 1^")ƚUEUPb8++]p,$ )F׋u eo`1O-VZ|Y{aV ]C6l>2ƐY`!6>^oxl8~1sMBev$ [zGzXBs-:)Bju%,8bVJ8*rc]Qgy=fMpwu2w\.Q훐mxJ B(u* z3Vc]J-4LsW(1AYÖ+T2Xq+a,X t~iwUHr* 8YDb$%q^Q\`  *HҠRWN^XXΗmKdjӝh^ƟMsӬ8n5k%Wsnjݛkq6jQNg pZ HM^8<@PQJa;0`L;X VO'G4BЖ:Costh {TV|XH#=Drɒm?)#mx|d!\p9PR+6% TM:HXgnL`ț QC0*KWC&EegN\D4nk~.m9v" .8 uIadLg?SH&ZC2ӈ [!Q<h~{`3&$ CQg<2i0"s8ou:]>tm>`>>Qx'3^fd\ĠZ :&kD{^}`|d$_Z)<6NVebrzAdJ S."kP@Vj dgݡelc,O >`<7֠Q`8YΟ_9:0*v秸~?]g.;fLaG\5d hqS7%IzAc@tJ?-gY1ۦ>)ydA’(ɸk\0C w}r4y"yy3k1ǘj {zC/laV}10Y%̦Àya~y'2~iuV:z{ %$7t[AޚPr u;:ހtz5H ":wtr`O#}/bݝSʻwwF՗a$ hvY`ᙥJ`J-radZZmqNE, o<>9fX bB:=/F;nI$㜁%4#F[I9B b+S5J Śp%;7VOd ^EřUE92UPkg12*a60**X$ #E*Y+Xp^f NIyNxv Hǥ[p$aɁ|.?5zq1s m|nx;1],F|˧Ż*OvΞj/ۛL5?Vj|so׫[phYFO-k|M xڵe*+;/&ܰ#'?9~'ﮞp}A=e)spƳLaR= ڐO:d F*4c>D􌎧Lo0Ҹx7E?{ WKq|qN:'HD!E<t8=# @kFUs׸E"k0\fx,P%M翜=ݭG]T^e>q--,BsʍmY]y^.3N'l#4ٻq7>8~/aP8K2WXd6:YA8P 3WpfjR>vIIe d9Y(s=zf'/$C"HHH +2G6#!@8x,4d9 Y}oy(O' K%2($Nrb@`v.!.B>-!.Bl&}l& )w) gWY@j4߀O_P`{@˼}75W/l}^U#˛!Uë&b3*Qe@LO5jx{pY5ryk)Oh(W^2J7.MT`Z+&' fϛW.+wAqU\ 轻`ΞUU9gj>}M W$kͺӱ˙ꭔd2N]?rጓ zjyL?<3v^V{uNR'W>L,~gU\'D!! PY#j:|.#<5Oi>H(B DGm?U(՗/aZ/rk$ZY.`+.s`)tѤ@u!mD=vxVNB[1lP1 Jiyob% )e:(Jv+C} B:w~6cJ =2w ڞnM ]P?l .ö7YGtm <2ƛd6vl,d {k_gE )GTיHX$ /.p(BG1h*~]<08]ӀXn&0TBu9l&t9i̚#CE]TGevkeHR5ng Fŭ; ,4Pw.,Dʾ!q/p0(?KD!.-ŭcҨ$Hr@Dd/0Q o֬T7 _%MbҬP7ċ]QHʡexeߙ?VQes}cL̑+?\uq nѨbu/SGb2.ˏY49,&/29s ϝ3+-`'^0*hEjf}%UZ,|Yt"tvIY Fq>"_KGr.Ew:6p_3~֔~uZβb;'#$erM% rੵ塒8g 62|uHrd3a QBSqezsPFcܩvD=ޛ,)Zyd7^ʛ'6\Pof>l"ù=WvQW}@jjܮ}R^FyQ} 1R#ֳfֱz7 " aCiPoY*s=S&rl6ukQ+w_F7{n΁!&Ժh'tj/ﻨa7WpDvQhՉ.QQ8N 4K%~8AxJk"wN]0a4\*2v DRZvtD.%tM2E΄B3$F$BݾdzBJXŽsel@ ]> H%$dB$F` & u`O,7P(hP%QBf{o& UB@^zy1&q+N.P2M;̠0L uO "5 xBii;,uh!:.aTr/SB s% Z;NAg}Cqьk (cvTU;ǯ~X]OH6֌T'+t긕ܛ+r +'Q.57FD=IS ˄56x|Yneo5!v!fui+@TjH[a0%1=U1iqJ xB,ZzR0F¦Τ^j}CԚ`? 
c:T?YN…вl s~w&,#f61!l% ¾HJ^O|PJfq_5˚X~<{zi{4>|܇gAB_矞[`2vr>D NP?_LgFF|hஶv`L˭NBdJ"}ȋE(QI1ƄA5uևS` 1 I|@$Ru;>x]HP_>p1_!DIW% z+UU{ 213xӚ +EIDp:.1z{J+-ܚ;?{ޚYg8L;IUKZ =uFuηȨn|ŌZ#?Aj̄s by7۔q+VIBST;RW6VlEQb+qӰ'J:F%DY}ӷ>U+]fSkfw)Dz<&M5F/1H"RAX+FV(UTH 3sQ1Lƪrʠx'ʺQB› -V.\DQqGZ"薋(K-/y%%n&UA sDQ%EQ]&F] ̛x 2M{ٷ޾@zg?&TN sV=f8;κͤZv<Ё6n:ZEa0 g&&^=/" >X0Cϻ7K774]0 Mش([P`mՔ%Eg!gqH)aKګl溽^XP@:~g AP=&ZIqebJ8\gUL,Skpj,5 BxטG-/휿ݍ #02NI4Mq i$5jxn*`#eTu/9d@?hjKIX"0ωךHYES 3{ԊF/dK),= -` J0eKc04ҎpDX{Kt*j0Jש '^*0>;/|mii\RJ.)Uޝ}E옇P`)6SR`_|EZ$:TV0E QL*I$Pe [RDcZNSW#4 FlYVE~ys:4>"5="k#c%$BumS=nv3ŨS>4SuFQG3C-  T"Ba=˝U]*2g`}=3JK"zڧ^ onU{b7a&DWrtpЗA.grSZ {c'&97{J4H1Ì[A IB,4M!B鰗Ee<۰ W3g.7(,[!H82 4iy/(W^/dE@Ц.P hF`R;v{wd5Ok[n`kQňtbّ@t> f1IH􃟙}62%$ɌDk}4wOWG[4ʧ~G'R]x^dxx_׼Iq^C[Wн1?󶻴Ǎ?]}o)|)32~ekp =7DؓP^H֓ Uw.:Y"^=2dVRafX!Xf0>rL0; 駡mzKfY?>fA?L@ˈYD `Bs)Wߛ y܄> %?kWd}0d: <]+I-%Ϟ` d?f+TZg,l0S`}.Yz#l7΋>>GPm0~zub&0ށF,Nd~xcw5Ԧ9V_x<>s0{2vE_H?wD㣟$>##LJZObWpg-0ryj r"C5%#wK⼖L&F1x )"x1Dt$`<2m49}f*Mt@u'slCJFf\;V*)[9Y|pL_ۘOWW o,j }X~`~_f>P h`dgS*cE:LhI<ʝ)6*]3\cD%X%;' Sz/T[(`o@ʫ _$$wRm󧃵d0trhdϽkNRRn^OP+թRr1t%pkCf1Y =PRr0 <]ů ҒlǶp+"̶S2آ^tclTؘ" ]ӪYhnO%zPII\fzO@ꈔE4|{ZoϜo5)ǨRNОS띧Ma8+ū:{&``>2>@w>'P){]!{k OL>oOսvXHn9* 5v[/7Dl_|7pRS4UԴVITU\Ǹx> w2#wg{*Y{<e`0?>` f փzhԲ"wU];rT_ee[h*p4 0'aA|.'rꜼJ=m6s4ĠVcTq6nW[QJ۱+!`ž@~WҝgY9+%9T,$Gge4:3U3'+?4<گ9K̿ (qeB#8@R<^t3oyn{3%X!%6,xBD`'CoB\]k<{\V(U A˫]pc^VʂЗhS1;^kr)62MR,#T\gULJej N%Je*xG`_I[8ȄxCfyP"RLڜ:7Bo%5Ts#jB Zs&^'X841XcX@h~`Bmj^@qIo\~)/Kc4-`jn '8`+;YڌďVH|#R5#)Gb\Tf8x jBx!ԏ\_oV+N *::,[{\ >ՄhVG/>L3zU |8)Lp #5%Xcĩ(#Sf$q 3Rjq1>ju1z'AIqzO'd> /If6;o]~Ͼ&O--!:jh oDt1 *uV p7-`:;SdKKOan_* :b)ZO Iל*s M4ʦ:{_c",-2[Gv;t-ZвV|&eSdn1: 癠BC[e٭ M4ʦ8-vXs`X ʘNlUQ =E Z감DlJ_f7]HDxBvŠQNnbw,a!_FT@™l!,b1(c:uTao)B-ZвV|&aS$OJ՞] (N#&;gf5fxhZY*Hi!0n*,lz) R )D ҝH{& x 01]xOaK-3ۻt ­ 9غƋM[ +wEʧZ>p ;}<`Pf&ۅ&/&B` 9j$R5hhSSmrif&M0)*&Ep 5y-0K+;PIy/mndFF%%)Ts}LPw.}XA| ?[env&&"0P&14,NB4AXeAg-Lę lK(PxI<$n]JL,=xvt^ /J1]^J_Fa*\XY(zﭕx7?̾:jel5.;_$sa|8I["7Yie;;֫FŽVcaNPFzMtB LbN)_$9AOo2{{+c+s g}9_[G:pѕ?ڲ5UX%;ΨŘOLaG޵dbigg)$ 
3texSUd$Rb(g&Q49w#|p,Ezz(ŃMf{W?iJKtKHǺ }]NjObY! S!<(Bg`i”Ac(+]*- e^2 E3"E 0'E* aP Z.HB ?D13FirIIQ'1wS _<ᒁ8"SC~Ǎ?%"y"T(ENՒ(Tf\qZ&RsaP6a -O |jhԛ}ڞ__~.@GnPCD.DQK$FWKAT D8i!HU\&()vS}(+r ?z#y^pBa3>R8f<c!q es]X$"9ONs*^W[6|afDRf(o]V@M /VB2cBEMIS)(@:TcEBoNr@2%L2r fV'>7 mQH0 ?^R`XQ*I!&cGa)Ƙ祮`nHnYk5S8vƿ뮋,V)ҫ^&`xX-VYf(21M |OK}/Mv+[,KE,k5XՆ@{TbqoU! k/;A[^?uzI㿥ޯE ulC2&3rI^LWZӪ[ jaQo'!HpB@KDWwl[;CN52IS\o,UZ0SY]n-<ۭ>/nQf\/;aNyH,^3TpG+͓)_:I)0|D3Y}˒hYMDi˖O8ҘbnkPLo"mGc$ Vmջ_*^v7(k n}?c?eu[]#^XS1"J*R% #Q)e| > jwPˡR'Cl΅"1] ]2ORʦ\i lx'[\O]4-AR`J/rye|P$RKkJ5*}&g~?nwEaiRJ#eePVaV_J(Y2{6lfq5fyU_~sAKr3=G\$WJt?b])/Eue.&a|H-C'P$;f2Rk[ޮa5ennؼ8MlS$;U#0MiGRWaš/Uʹ^5a  c\bQ &=P"vC*eJGL`|4oa>]6Ul~Sz ,Ȣ7V?=8=qOGkOS Swxo7YA$ INU ȯLD[zK*nxA#+Tdly~Xv#wEB0iܐ:&zRΧ Yqzڨ3&8,DyGyH() "H9 %6ږ5N $3fD6N U֟ ġ4LގRקbvX;ݡbA!B#88s, (f?̻S'3XIO$ؕl'/vںٓBUXHN~U!Ɖ$xlSQ0ʼ?bdbyzZTwtY>L.=B-LȦ^P)Gl]叮!Mo4K.[zȰcZ K9c7קb80 \xJ**vnzl"@IJx7\)}7̽ Svº|Z=Ts0ںi!D*^ qUU9 b e4Ɯ G(!׉FuO}5mH.%b=c$cm[U*lR>YY(T  T^Aos}2kSQko(VkY(_m%-#*a <\SŪ{.e[-B(fkFpxi 9_ 8~ijgrA8egr OSyc2JUZ]wl0ԇ܎i#FT3yU%:$){{ 0Q޼ rL!֛AZR ΃$*hٵ9;eX?(S3Vef.0cWX{*!*M# C/-7>fv X n I0> %F ʃ8h@ۻm}F;qKzv޽YFtf.U 1Tնa;:`q:Fk+,јtr-|x~NSBB4QAM.8J("PmwlD*@RAG%)!ul" M!Fڊ=a+9 pa!6F{'C'Ȝ*^oR:b6=fT &i}źbg-H }/8|(/'MBZMiI'Ihrp;G6ޏz#JTciAu51Fc\8p}H׏m s-;鼹 x |^acp)q&?5p x=]&=S dAq=k3ܩG-aNqK9 <`xP6:)=gں p$f$PwV0,BҔ#6O]<%p".\lt\TH$Ftui5]r ұLm*cx]a`j8֑P" Km mō&o_ʃ)rL( d&O>5G 9csCQSM!kAZ#S.{9 QwwrHy|: v4~6͘OP̃D(cL! 
Xb~Ap%':FD 1Z'@՜eU 0SHWS1w;1!IĒ$2LƋ9!uz "$Tlkd)MI{:bk:pчó  p؛reˎxhdJ2_]ՋӏaFfdȾԫZ=,h0˪kZ(פՙbsEP}R~piQ7y_wrfK^s pg}jxyQLz<:>=rݯ/շ0j\cL׫Χ,6K\7i-H&9Q ~bvgDs)^Z<坯qX0H@$U 3n7| |~,ˋ>xyQ/Ta;/`F<aˇg*?V -EI__*Ȕ{R+P"bgG9{'*>7O]t*`QСT; yJe>E|,R$U얱3$N!3S`{P:KmeKk jO,Hwn\yeNg2 2۪B̔5CXjP'"00 c8)\#1hq!FHx(Є}Fۜn;NKi~ }]J>loX.$Nj+:8C-VIqf˞$P b"P sU|"JjwT-MǤ('~mdT LʎJ k tlZ#H bQ0d5 i]7|ws|9`8ǫf`!+ ^();tkQm:sg瞪_dzs6ſi_}XUc >Ol^G͝׍f٦A'_0N(hYqC&8 w*Ղz ~U/>dx¿bu?YraU*i5k"V$,\q{FIwQo0b1΁1.~lOK_'!_t+ZEYg *!tEj\tg]RFIV$G2w]'ƀ%k9qb|g`=L)Xv)p7ǻ?% R+dCXWaJ'!|^*[n6o8 JKuV E"L,KVD~FPӍ86E} n&dEv]oÅC |6%\hofimO}0w~@c 4So~B2:Pm׽J `۞ChzɽyBxrmf4bUE>>2QiʻCiྴ2N\3]qVwWuZjP)CkIJMi>F}|\+dqX6>JXi^_&B9NU+iH: v"p@ZF$0^cw WuRJ׃Fj +^Q3Nr;S񞞀? _:xu~x\,H\!930?\ rX3~Wk0mc0n0Z)J {W3CU'V ^.JM(~:v~}cŜ1$"aiDs5mq{.0t5|@t-8`tmL7̓5;)~w&Wg] ͏7.J3$Q,|&\n4_m,N`Ѩzԡwq,/\k-G`yeaGgzGNCy9Cܝ^%ʵ{κHrV"e=CXV2U ' ;ۢn$UwzsKRjUiˇ[{ M $^giӠQv&h՚RtSJ#LM0N9$X}s#Y룣X>΍A}#zp2Y룫aQts N > ?*VS z,FvvJ[?kwԨwggn9GOwɜbØsHr|poL$L B5gBqNW|I!5PkLħVjM_i~tWVXMHGn1˅dJR/|nSf41"Pbp,X%6HR8rch8IC{O-'+% W^+B5SzG'P *ۛ mc/Ό*73s#K, L(Ji̍EZDZYP\m$YBHc*%Mbm k n_)d2^l㜑>ߥڕQz `,b)WɣͪxM Ux]v<YڠCr3Lb|O}/㱛=?Ϫ b;9SU]ti{d#P.~jn)fғݹY⿖$+F2\K5^Mrn5qxH>~_:ɴ1!!Hs+u>oZ+R/E<ҔU}ԱVNSf44+Z+mHW.S3EMn(ɇY|9zM i⏶jZ4D]&r|*J%·L2IcxwC 6Yn>1#Z\flhDwQ4' tm5&:L#ymWD%AwOU5^JͮtO\[oȼ"կ5d&;M]:=ǒ/z5R=(BkTנ#wFCX2[%I̵j+&;^Z\n?i&u#r`SL82F#Qz Y`BcKO kG`.j>~MQEf6*,~ i FKTCCc,Gݞ! 
L7Nf 짧@2F-, md -wQN8b w,R*-U-V `/9T³rׄapmF =; 5E" eSn(Q:L2 s( ,"mlM &wLY.Jq;vLΠ:]}<$I!}~ʳzƏhs6JXݵ/~v>3EL*DރV*/:q!UG(Ҹ#^]  Kpx>Rg/ &%2TRIcLK@~[K=H>9tGW&`u5^_V` ˋzk94sOC We/.~-bowҤe^-/N\ޓM=Md̋7\/pyw˕Rl%jZ_JH #%9SB%"ؕبZA_}r՟#%,,!(Q=k:6$@ae2"Fʐz|uQDy0S%T!ͱL"dbqAm"3PsO6̜_a!+yUo鴷uH?֡F\rtr5`4s7M @hg]u0U{hx%2.p'.8]w3XrgF}eTы{g|ex`>ҢW#Vŧͧt\qFϫFDZs9&%rY.}Uܴ4'WuIL uI,+px*K?v61>2I9nX`Xyf QW$/8 ^>dtnSnf ""D4_^d0|8YՃ8L]E/|4~ec1l$ 1O^q\K6Sr_..Ta;{ b~$s8Ja,V5Ax)baL}x5v[z0hv>ҳխngu/8DژI8Q\ `X UB"\$1U2q"eys>_74cZxPe,he(b(q齒ogn@ n.1VmOS)|=tr{be cƣ0h?XڪfhE|uj؍(!V hZb&T زFv)hrEb &+a<"iBI$a -k ֡i@oBcGP4VRҀ4f 8ɂPł; 6S+K}eb) huOhĨYcr`= coEW]~HjjQ!zXP@:=׆Y$ f$w:=j 7M܁WfP9-Q(WJ‘& ,XE,X00;5R[~1VeJ̀ $a71XH'0Q8NgasƬcSUmGQ#JOS契pBn2zy`-0]w-}57qTgѭ(&9N} B1˄#+F<ŔX&50S)%=/x^ʞ e䵳uT$!]Xԏ$R7޸ƜJ 5K^9gÅK1%A%ї|R&ŏBCPP! s+f^gWn}&_l|^UwK|7}eb*fGkT]^\_LK"ߏD0H`%{x~<^D.޹<9; BFBziT~<f~Sn 5D nVPJ/7|~6w wQT 7|?59lӼ3YHa<(ۯ>G4`nG= !;c*,!,-ssdENSCMIz=u,޿^s2K)GU'u?mAܝAE,MBGv-P\;xCCnx9Gݺd%1::%ɖl|%:'qܩMco||-EXjהb\O1Tx|hz߾i=)2##QdB 2P%ÀDrP?&)F!瀞b'闋^Ţ:P5 VHBfjȐjɈsy$$Nh"*9J_Qu0R VUGBKm,g NOu mfC0`(udg%6Vb#pvB9ZPaob.0j҅`6NVD cE[RE[) #u8J㢫ۀ;>ِZqOٵdjSbjy)Qz6q<] Yj6B=b$UKgSг)c;/]Ys#7+ z%>Zt_@]הv}"E) cew-T@ $yxWb\ܖI%?V:J)^dI*A#Q[Pٖ~]Am@hD{ f+w2]\CDilKjct#:($Zt wF^+u"ORWƃjOC܎05=Zچ1jЪ/}po wg Azn9ڪX@bpR7HU4qzqF ݡ XA?8WAɚCӍou 窞Rjs_T#0:zuPyCgRxg5u'T.$C72G6=K|` 4z#R+spwQ+Kxͷ Os 7.%=5/]s7 {wG RA%mCcԵ, v;זcRIVAeHS U?+hIQg5@;VV-KboN!!N*C %׻="(IvoAfYB3V\LIpV3}vܘn2kۇ{IK`U#0a !h)H?pj9} #tFC FApEj#כV%NX~KQK^:0(8h=33r)!&fx*RAKͲ|%k1f|e|s=ǥ:<)n lz?,rvTMVr_зܟ*Hc,񣃠Lpt`V3_uT\*J`Fw yŹlZ+t!GנG*$"@F_GĠ9)El׽m.|,PPѼq!g}1qКlЧ0B\4"a޿1TBԆNx C!h<նۑ{BF[L` Q N#"s8E.J&8˯:2\:ۥC)c QܹG"f @JoE DNINc^B!EL6/Ej)cBh72~pK@{T'5;fw~ۇxlgɇ!ޭkĻđx CV3uw_[{˹eOE,ӌ*PY`F$JIBS8%K)zYlTę6Qp)XbA FyBUL"Iq3\1La1ewaq2v(U53!em6FBu! 
3mƗ@v*^Ixܜ1Y%-C ..720DLT <˿yY/>c5sCЅyy:GzC/p+2m3QjTl)U ;V4JQzu,X3~%=&y6[XPBIW.^2e,y[$8G?^}s7BNZV5|F @hC'Yy"V$kLZK]dvgc{<'uB\H ׷nÇn/_̠X$cVi6#4ZᮡU&A ӻ=4zqOpVU R9́fhHy/ȮcD|P%>d$wR"{^wbI# rN%\"@ H|߽yD&1)#~cJw)A=?Ȫ4K ؜5>1YhO rBV}tce~UoltpÍ6 |>} WTƪ1!VjtBf4g'La*Au'lJ  hCrN$iq7D|S3Xos=ǥw ][&MccG ϭ'm%ЇQt<_xLmJ[+V>D5Bcue3kYY Sr{LjawM&bJM#Qx f"bCAPjJlhʯ}UNM$IE9)BTq55%Sw  .!li{&[H%D64v2 N> w`r̿Kވǟ8|w7p$fX,H:0Ye\QZq.jPa6m ~U "XDsD$*R9f4N0 U}z;J$E YfIT"⹴H> Mڱ8LCѠwSF'AFm{{״aڳ4MwK c|Jàӊ0x*c(6tho_nc'7&E*kQo9Gӛ.}mf& {0ȨB}fsKB-L@U\vļhYV$XМd㰟0h5ni {(Ahy W!HкdCrKAG?Jhh5 `y6 'V+RGEbGXPܹw0A@{=hݎ#Ҝ'",B?w^6UjT'I h8j]Rz_v?_Ye d xz;3wH֝6 |`PfPC|2N͏Qiɋ'/"12&Ș#crlcd6F %y唦8K(XQ…4+T0KY42A^82E!ZDb)] M!V @uJNAC[[eƩE]* ܨb2w7(`Q1 PdXh"8@A#eQ עjl=DO9b!0Z/?Qd]wn H9HD$"Q& d\ηX?g txT;sii( oN\XʽrO'Fi?LE$Zy:'^D!kB8Q27̒1N= l/| lDMoՆe,f r q9J37 pĉU&PtJo]6` J]!^\WVMJmi{uVa.XgE2-CS 8czpB#xPިX:!h]ҀP8 H' 9=!"<*nQ?P8D@ъ"ƊWL `:{+0c@e@˃'5;& 뱽g24n=^# ǘbx"ɂ0;`d+̷hS ?\㛋uL욺:V#\jr풞3vt9߼Q QڂOEƑsI2E U(\a& џL T# 1{T.}#xx̭BG6)A[<{Y}3_j#_|nx~9l e4TBٖ+IGD23J p%!Ȳ,1eXJ8@0b >*Qm-ф@8& N@Qr0MRV?HER  !`~|֥M ))OsHJkӳqd:<$GƳ:A>b1OR_YHƹD1oFoZ͋7]iΛ~ko3nyOWA>kzLC-?.?L|a3,ȴ> 4g h$'4yqmb98M3,}ʰX KZt:0|L& j:PKg/ldBtbN~j;*URLXSϔ&@ wDp3*;6I.ʙ(bwJRrSGaZ99c8Vtћ#dpKmqӴ-{5-[آfْ嬢]uwU#*4Hiz 9^T<>easE,(  \`*dʽc- }[yLQ{cU_bry~~s1[Yc%bayŤ[1aXN &)Sm$a 0L%*r>5H;!s,?`E$ oSO6K%<QI o+ckUrtp^GBRsCH7C&kw@r˘#Sy-|6UF)GOH[1cVC`- K"2Ee $Ba%-D0:pH:󓎞Tc?XAtX=-@Fyc L?̪JuN( T\ޫɷ(UuJQ<[@Z2 qwYxwլRͿ -:2+.Ĉ`u?bԔ+C9%CŬ,![t'2g}2}؉ UP^DeqD3H\нlN~cEY=-rOD5Ad ćb-|X88Zǁv]]|?! cAe3ޢL=FSkoN}Xn|y1H(w}RYGsT3H|+v^ G}2D]N,m\YY7Y u˘n-AB}ڕAޗ| %:Xr pjnn  $S _4hlmOM&7s(RK@+OHp‡RZTrL5ŜMj9߫ysk9߮9R6+5>6gq#G( a5XJ|.F5˅WOgyicD@Z,6o1cdm[`B~~D~#BUO*B+Q%7YӃ> EC5Ftr4ٿF4A YB[#!C?H:TxKT\HbNx$W1! ,8)'$49c W>Ȩe<: YS#ТCH<P&M!pNh.D쵏A: 3I'(jYU`%f$jw!"VT$SEWS4؁*daH<LpUF^9Ǚw\r&DC!FkttEmTpP-/ql\zV=w#ۜ{l($S480Dz ;<8#^*,C LLQ"XL^^%& i9H/LdvV$6T0(PWYB Q95]5}Uxkp]SNqHk5K ޲T['<ESNcwU{L]b491Wj߮l/ U =*9nc,=T sؼ9>![ZbGҶWhv8!λ9A{ozePwuYwR| << )hz_o!RLAKdZ. 
ELөoVWuV0#إ4LjXp-'OlO({Fh΅^0_j{L;]#FtOm*>}̗@ocad~^j^YK{>LҸ/@i?k^VN aNGO#˘jZZ6҂L( p%) ZRAq!ՠ=sioց 4&CְDacH8Akҁ3 j:]rV5?j΃Ep4E4Uuu HHAb!d`XaK0ĽWBÍ8CN&B;{ 6;GD-Խ-MVK]XSD'1P$$K SbD.`)R܂EEQY'q^>%ux5߭HI^eq0;߇wGtTM$$yT7W'gd17C7WVs,^P|Ǥ)EZ$ه?/ˋO^ZW ADLo_G$׋y{sv6x| B/93׋V~rƟ n[H2!9goOΏ~ }4dM;W\Nd mJϔ*kՐ$w4^,_ \y\Kgݚx6Iǭ}bJVy8_Jp~wa11׳|0"RSLKi2dj9[KpZ]2񚑴K=o!iDlw(Xv\ɶ( J{CB1:iN!~X0~/SHx+/ڠdK k`@"hz~e2U]|?LP:OO} {O7K5kdo=a0?S bd DX"Ӵ]ƸCSeF{CߛU;f!1 îE 4(i*ӱ_E˒uUːoˌ{ψDp}crOAʓ5D ]+A9Y#2MD$#1 \QLs ڒ{ȰȈM 0_훵, 6'3/6\M=%9MyOZsI/U&&XֶS(ϐO ""?>|U!DAX-_'v$P1W=%vznND4̗R\?=I$ߓ~O^KY38 Lu0q= ȱ#:jH9Mb0i `.OTT+,}I\4[|JW+cVR]RRhH΃Vv՝Eݿn4Fwf!wD Rdqh"q"pқ1}F$sIޣ8`W9IP}g-"%x$ =XbkMn6W*E{]YF'.#Bjtz⭲Z'.* -h ^׉F2J'kfy0cˆtm\0\cgؼ;`sfSS'UPZ!4.<7@ 6R4XLFRBp{ 60؉7IWD{^z=euH;aK hTW84mY#Ea =9b4F:&@Npp.B"#BH8C  qaBj^lw .fgGRU̞[5 Gd_L6`|[8'SDţ.%a TNԄPXGvE( Eq#g {v Bb 0F7WzBHʵV<0uʠ9ixJb.%iI]CŻv; Z@w+چoFعGa"_)ҼU1dݴI tdO{قy`8?252f7DL"| ɴ:Ե>9lI#gۓH.hL"ŞIsMW'Dnз᪅ЩL{D߽3(NlPn&r?WːOt֫ ve] ; @E,aj8%L1 (P{Px9X1j>Fw`Ұ撸M(mHo~՜8ن.לb5]=׹f| fuLCf-Gqȉ 7r2Ealb: Lj34I܌x%̀FF[)z`$A&Ab2m#rIJMT(S|OP/ $UpɍkGLfmt!Y1l `8h s^`8ݗ eJTκDXrv׫dqҵ2Zۨ0;gWeN󵚻fCEBbMp6>@xS]{QyZzo[ lX7'5i3lVcчdvTm~_'tO/Q͉d:h]7sŇ,׳ XNd.]9>HG,^ĊӹD胔;rj wDg4%Yܓܵbro-29f1K(XF# fR 8TOXN|NlYϒZ3)S^>8GF8divU]ƃ' t G8dpxv<:w X0ӷ ;q s/~RJS7ʳB*$w/2 g2 z?I"&ܹCճ~zp&cI+%s8YVO3pgc~08S t+ $ vS%=^!Zsfī]$ϲaulF#EnV^Y8k(SZ#i& Y6 HX8e},^!\]wTwsՊjC~nP4@FYf+aoo1:Vp& XE9 d4'NConF>mݻ)1"{v9O'|3y"oƮ$fcFeidM75|tڛIFk.?-i{wuQݬ.*f9%wG\,qOdvB Tct W律rA?4T Wiʍ.B嚑k~ TW;!?؃; Tb jwlk 4KI{Ӌ!5>؎Aq\jk" Y(6K dh0dMkt%.u^IPBʧ6bI3#Q ī[$an+@xO^-))YA||} @+X 'aZv,ׅnp9J* -ƾ.p]M1+Y^F1`~3CT]Qxv~jE拨Q/\7 T"Xrg`L@m H 48Vxe(zmjUo5P $]6?Nk,^Jٶn 1 A^(z@VY@+M KBN ]2*L ~쵡HJ%MaJ*jivX-:egP7*bb_iVuz)U\NB#@ LҴkjzhƤA;e)~'.LL;;ZBskp2ږ"^ɾ{y})Jp%x6K+&|ސ5-xϧi<ݟ mzZ0x<8 o`5.OFws6gD 2/̸:3Ef`9(m6;J+`'D"z6R(cL_۶ ٓBKQ}5^M} >Q 7i`Fk8a];6gՋGFb{e3,Zyk,U;JP2e?6=Ɉ$+̯&8bh]fD2n(#['?~#Azh{oqzknmw8o7un/ #uJw tq YSĺg,ANmlp Ez;%gilZw^MەCWyoֹГP^WR}%U]I|2`6ǣGߍS 3Wo3T]ڴw7)/iq Y4L;:o˸;oXM]nmR;mӝ1LwWt&Z<~㩤) 
Qe<(OZK1s2[91Hl?/>cSlqp^aᓷ<}?A7.y]7۳ry25&/}!./=ǍKD^j.܍fP4" gsRYvՠ<*S\GopUߖwWSt~k(׼<l3N_<6EX1džy_kWAC>^ ~:6l42 0t4 Re)O*_+#q=!o\XfJ2qml]lKj@U:j:NdpeN:"|3PJEt5|*B%?hPP[V5\7RԭJO@$w8J_Rn-J2H ȑI,vlI'y@JJBN+霰R`վH=v~eӖPj@6``UYaU* 7ܕUF3J(m%!1T\QxKPO[?q#L Oi٤^*QџrX #ʉpO'YF\34[ZvӘUQO!ߥl7*`BʦFƤOlMXvY:Ҁ o)bG+m#,f8aĞN+ME$+}.:M!.O>;˫VJJTKԪ))$gI! H.dzgDCj-<}/&|}'Cش:ϓo1df@ R3Z=`F-J>Dpߟ'dZ%{nZ}Y+p;W(2z{-H2 9j ZwTز)LU#ʪ2y\!s=jt’R9; g 8wNCAy_%Cψ> wq]2`Paa_<kD!+>Wn<>O oifՕ?bUዮik5״T8?cTq oqUWx ̂2"dMG*ʭ;f0j2$P+cRnhQZ%5̒ V]BuCfww;ԉ}b{O@ ?f$`| c ~v^ˀX<'dʡ1mR:Tp.bߦO=k&8[US3V5gpwAF~]R~d 籼g\!^1%$_O^RԷS;~w4Tؽ:_.-O@~xއÅ{4Gg!r}nx|oFY ̆˻@r|jؓuX:5G瑒IKfIgh PN z.fv`@nʀdo R _dW0̤4׈-)=WaHf攲URP*W C1=?g&EpVbE+YS:_i!J@]6>+T+$~=?iY.#!3o )'.Г$yLu=:d>?*d-|aoIV@~uQ!<:dlCpr:AR?_ yEOC?ϹIl>J59(kn SbI)IZ$l8ąr$5 9f,p&siFQiWݗrTWgړ[}yHAn{gQLp՝Acwm;},dޑSi͵r{kj*DG%t㖥h7hJ{ͯ3Eë窹y-b'p.\͆'?975m=G%/wm !]cֵ1~ )H{Hqa\)*uҳ|\|(Š5.Nj U,z}Qڌ #oy[\sfedy[UNƻi/0!i~=S )Z/y\5CyձUQEq Ǥ+?ف DqQK{V߿y%7W|fsSKM/~:yd- X?Czh!pFhMy28NzX8c|`@a  B=c!LT3,[8S2Nd[Oގ`G(>-. G[/2٤Mx~D xA"XL%Oe TS2!p(G"9N@Ԓe䠀TWl#HH~F{ `Y§U$7:(%6Ǫo`md}4UBFQHʤUHr kA <>xj&Lـ@**ݯ)]V+NN};9|#QxG]\YDj]{61ysq[†ɓ䋻AEŴɟ1QB ;-yC:c hH;6Ԣi/NYGRQ@LQ*6@T]N8P]K9`}^2NP kZ؇C_|΢Pr;x.1FhwxK+ G0C4q)qjsf] 5ߦ<9/#s{ b@MLMj0ۋX;jL8ódpv->LL;`b^l'xfL1i1!v-,T2NN'9bxlƎΎ])$ Y|=ib$ 5 7 i0acpmGEX+ة`OO<ک`5wjwF2j,fK\Xɪl8s 9hT؉Rb'?+F&NJC"ۉ^麵p?{[ܷ75?&yVx+l!VsMKe+j.bMW5ojɜ6+RX8)RVJ׋*OHbH KAdM&_ 4|Nh&_\!zK #}/<2Q9 /|Nxs‹vNxQ/r :'܍# ddU!5PkW5#*%o:m#Fm 7:*bdDt 𳚁>t,I#*%56?(ͩgUX."O͛8+c(ۊbr1JŔpB %JBVU1 !]VDx)+[2Ԣt㦑KDQK+i6U,ԩ1B$rh4Rby`,HmYYFkJzM+cP^c ]Ǩ!gN? 
n Mm "Ă:]d (b5Սe8x7ywMƿ"Gkj5oBZQV5 uW$.JZB4sU:3Ҭlb5jEշ~huiB87 +Z_hnbBTɅ ^Zpo#ld(\Z[kMuA,X8 p J5Y|a^yE8m7D As0R,0XN.䚤.иw/{ԐAM~S~5B# %ū#Reex^7f$<=/`  BgԐU.;~P.fƅ>̈́N,cъ4G =%z,_7PG6Mp ZpWJvj؉-U,xD*8hTūH9,?:Y }@*#/PIV+{Z: \i?5OЯќ3J][HucfiE"7*edm>6A)9owiD?0*W~{=ݯ]r6 E01wdĿA˞yNr_Cٿ̧Vs3>Y>k5q)tg-'hl]t oVlIjdЯD/Vz kWw&8O7LZBCϴo5jA{lDODžhxWM0ѸBZKY[V:z.=syHx5ZcVs_>xtD맚kq:QrA~A`u(z&NQvr_i;_z = |>|HyQLtlg[;t|wBz:`a8:5ܼھ]pD1DM)WڂAuˠSmЗOLR]1N) yg'NDЮ)soWKSiFf; s#@3ț+Ug͚Ol5c>;=% )Cؐ)~S:)nP_=>pϞ)ѷ0r>'k_pC T|EBæPCی|S3/jɢ9j"QJ-P.0E\5"˸߿}|踱 @ڈX0HƽRĀVÿ||ҎQ0~\QK;6Yr˵8"{G) q3h 7Ǐ s̳䂋V>JG4jMe@ \$'mdU/AXֹ4PaсEKP@4Z@A:cl)N;*)ܱVX7`tQNƈJBjUſ`r (NCaEc F`tT$cRi+5>ZO90:p"-#(|;, _p/yNJ\p@I$4FCD%:-aNrϕ8In$Ls ]B`Z!mĨ[Э\t\Q?Mlcq|JJj ^ l'~d}傕)c{]>>'H_ObOip惿34LgKYz?A/$!ow03h˲F."4 Ip\cL!"^*s+IŰ6َ$h@%rr%A˺s]-eMcKwTxOS$| Еp&N#@$*fԖnbt{kCg|_Ǐ3BSH+@yZ橰2~PЯ D؃S`(e Ix7dm(]lEl>"8OnX mwPGW^i<CoHt6r˥OJ8!z#q@կڽAZ,% r@#,`Hum T4U9o³+H>AZzw%^^#>1QҠaS u4fqixxlmUDMDyI$p?w!^6Ў|-g]~j!-?u7%QrDm۸: xh5;rA9[ .$bXj(V ?- %0jCB}w >I5/7 jeY gi(&rU"Cg?Б2aHɩ[צ_  g3sZ7QMD =D/[2?9J1T s0p 荳vk(Ï?=14{Y . 
tH#vCoށd|Xh2MόKKP)4=OGdP8Ma&" A} n` uI}1؀*.p UVB"'4>R&Dj (*Ac wJ@nc!WD acϫqtE{ 6Fdq>ë%w#KX ?D{ͼ93t7׋xy;eNqw薹wؖ5ɿǣ~:l~I˼O{r?g'{~- ɳʟWy]gmmeYCӭsЌMZpk}]-)Iܳŷnlpm W?o7OMb#sBl1"m[f2T rԜn@hr.n8.EXȼ8oQ_A'փ ˸X~T VQ0q0C-@ 縔^uczHW@r7׾ӁޟWv;!T_' h؅?q[~M&Kͳ8:G!}rj.E'|b4X/ˋ3a6W{@'/Mkר$.Gvr,"ރo,,߮Z5oç4P~[T!̞Oe̮ÑO$3կ]/'L-f[0Ͷ`Z=rz]޲sc^KU^Ʌ8Tnn̪q}gOJ*Ė!Iɦw`ёȭ+ˣ n<5Vj:L1>9~ 9&9I"&Ex%' I,`'{a%U胜$+GV,)-[$&vr9x2yƒrsJQZAҹa$ ڽ Z}@ ԻHQG*IkQ*,(q 7>@*.@>7a\[ n:xb RS#:(dK/bNH'`QM:'- 2Po㖲DL6PEʀɿNn )T*j#2|oUU8Sj}AԒA&@q +%E30tEQc4^ʎ==Q Qx58= 4Z(ASFQd@ׅaDŽmY4>4jkCU*G{UUhh AO2L|+CU)AU'\ %%%I0Jx0 3Kw;)SB(0;hHꃩ]h(bi.M7N6I]O1,ZOf .Wo")eojWO퇏oNUg_`~y[{^ޜ৴Zzs{l>:+N]>Dr!oO>{~ πf)􎳷8>0aXO6A⻯؅d%Zs$oד~FZ-꘯|niĨ:8mQ)H z[U}R2O|IJySTTqlR(AmcQs`,4Mf?9"#7* 坭hP>!1hĘ]bG-"agL 7x9BDŚUVOg!zA8Z`C4g\o~=I攲|\MlT|7q)YlplӠd}i;?u秹^wuq,t5%WR4p͘12|)Ay:3[}uwaю \!q+$NsiBb]>rFupr+"*ZW3nWyJzxu7ISQ4n>7& 'N a؋莣mB&r1aP{ } UMI0CEk}O/PU&HQK㫨 <e @,6!p:@Dm*BT+RsΌ@&, ؐ!v2PrY0;U {F*"eLf  :g.w\vF-&-Q5I"Dvʀ+>$$;B^N $rCx4Y1XK)K}@@Ʉ dMr Ҵm'2"[Xl̮agyS4er8=9AʽE@Z#Ih861] nΒcCӓ咄^?4fǪ:R̿H%SUUKW-kj7%ףAFEhk@cPZ\]BmEK ݒ1P.5q^ǍVD+34:P0Pq,T_$jZ[٨Vmh_2%]}Ao Ss }bZrjW)櫔5㫀L=t nw hN]r:K^y_"J1rmX|#=.;y5cYhݡ{c(ۢTo;Kuٹ;.jRm3gb6w31[WQdl?c㿥1.m/f2vQlfy2$ީ 1UWuTp>kd@Gpx!80>kfظ/M] dj 29kfCWPGn31J˘}i7{v&'lt 3)١|h˚YfbB*˂(!QiA $$j?臸8W~9I *@UP42HJJ"svmV@ EA2bo6IB8突ExC?{#,y xq;i;ۓEV_RU"RejRYiB(HC٦*PYm: i(|(J+f1V2V)d~}G>\.Rţ:*=RR⨖rN~NMu~{[}pp|C Cn~W`D/~(^X׈=|_+<"KxrWDĘB5j#\!#߾3VA/9pɈ4 L6@l@]NSPChS4>coUC95 +NjwiāC M?惜vfebFrioeJ{fp\BP8kBV42:Q|} ޹mRg~'dyjcPޓ^/Z}:[!2$ D~)v9 dޫ2C@AP!/09ŋgHyO'=UZdxJdS~:B5lbukL^o,rM U:Id7NČ̣qv:sZCdó3Ph(Q>´ ^2B _{$/?=;b_a)gװbQ)x/(dހ-M7" (FYوo୓j3=M0q-# qm,/cBxJQ׆v aĥfA vfaghri՛V"xt%W * Pu1SkEkxKU9A[ )|' 9EJ:iS3(J&ţ+\/͜!j,3L3)8hC(rDHF'DiT#ƥB1)U柯E*Zf\d= ;jӺMdצw+i6F۾? 
F3jic4%xEI"ZôB-97yw^gJ?ݫ>W_~{tpVq2wߘ&q*(}j3.7>(ao5hgKDi҂@ ޲8W-cl3 bCkAB8E A14( \`Ug(GtԒ.Ŏ7)Te*N%[` $bʜØ%(ݡf{z,[rlQ.v1.(3(.V2d˾bpH@l>gl͎o<9w'=΂Oy-!=-Zf/_Syt1<<8B-M/ozh]zv2Ũ]LgtFkL[H$fs{m85xc=jD2 89rj͹1G(IJVG3lg<b/sVL %L?*XP\5*܃!NͿCb3FuZsف A;pgs%JUh]u5M,@LCPшVѦ\54(ɂ2^m[]R)F>99ih0D@E~=XtéM.it>J(7#|$ @:KYӼhh9SAZe \t)s Zp#| 05S2?MI)Ed*iu&YOblDS3>Am91.T pEp#kQ5ۖ6-h(V8Ý!3D:J/ &(Zo6HE w҂^FVN& b\2S?TNsv̤_~˗ߡT hLҲ"9MD5d"Z*qG2v|8d.).qlDA6m:v^ %J j_&Ɗ[z}Am4+{TH?s\SN]g3qt [s{zp \ LO\901x< Xb`f}f o=;ROZ O;y FVJC9׫0}2ٔ ZgfeYU ki(Z(,<|\'2 u@t骅@3q& J+.l@ުrؖRoT҆|(DnGѥ1k/OQkxI!bs٦AŒ?ݹmsץf]+7Au)h$$tOa ;I`1:%l>j[O 6gA]41_8y8gQfS*o@fgLKO -iK>Jbf7N(0OT@nzƕibWC2+ȩX9_Kx0Sid?)?_8:N:.] |Bڋw1h.p@Rlu?}".)'S ~>1G]W^&A/brE]^&ҴOQL. |g8?58 &aZI |Vi͈ZljZdrx*i9{6Bz1 fn!t0a`^)5z4}( b7%Iǝ78 yÄ6p/yJ.NyㅐCNɣTI+,on3rЍr [YuBoX4?v<U3n7]抲g Ҳ*.%= G)L+@aËPr>c׵"vON1*%tFOϾIq F60Ɍ?=9,F-FY"0&(?bwZyuDHFJ'D7g?N 9ϓs0<Yē√fEw2HB}F2Yb};RvHb]ǔӝiAx$8J*>KGW08CFJlN3Uk:0Z;"rAZϸW'"`($餓xte,ۍ1$c] vSL͏GW28%"C2:Sb,7ȣµ!q N{ T( 21*;Xk VF4>_IZ Yt[UK# ;F\+UO@[|㨞Uvr4A)Ǚ,{ޮ;ׯ~:jfn%w_O,hqTml Kֶy97ATupc(UfWg˧k'>ZkH\UkX;'Rf8R j #b{ݔq')1]u6Gy<}O?ƆvlXұrHG;g]{]2ʨBoV$w|c'yǃv{'=?Q0=5e}'N XG.Rj#E-PS;|)c@'D qTDA& /8cJiCQ\FhL7@SqfĮm݂: 4RҚtexG>3dL!Csaū6HlI;ZAQ4@W"h2ۙONW285 b+<,*Eh-(*(OLS)puo|LǷ6e- ^b >4"jJ dDŤ2 Dr~}>57.x~Uq7My/^-4 ͥ]Jg,%xх:Xm^T[>Y /EŞ IZ{Q w% -dV;=qvUoQjˠᩱXڽs {z@)]Њhq{7 (-Ft2^>H)4;5;1T>}sk葔+n1`Fp18z^{UWOw6wМ D]6OzrL ??9)Es6WW7.jM]-J n\Fg=FY&pz{h:.tc UJM*juH<ٯ.g ֆJݾvxy͙;["h#@d2ꗐx)&:g> DL٘t&6Ζ 6~n+ YM> < x?q,7i}p_ElGarh>;%@cy-R2]2.^m0iR m\ k7I4;xI /m2{XIDljBSe'{>!F&K;$A*"wNIgETV뗚+W_p@hjL 7ga$~^J , 7'5 ۴/ TޫZz|,>1ӯlBԚVMLc'Gfb5gغis?9HbaLj4K@ONwmm'gwlplo*uTd_6 0OHJ!)+V6HJR`8% =_F_ڬ$\>U;qrBZ hyuQO7Rt-cᧄi._iZq .Rfg+>;WxFWȜT<.̢O AYp8_/kܙ&6=fpv&j%2pզs}geg<ʍ_W`R{Vv;DF~t}n~01fmdN5%4x%Al.YdVtGdi-b(hBθ~_ T#O:N,stDT63WfC ]iа|zvr@jPoIMÛ~t;-EarRJ,5,hQIxQav2=0\\N+}M ^Q4xŗ 9MpM|%_hʧ6N$<\yJ4Ɇ?#8\, ǵ]DkuLYZgVx%߳KkJ9%2|2MyL玡#cb_>΂Rb<^ٖ=Qݲ礘P/JTcDn0"gmmn["wElJ l*n]8= 1fc2FtKdSkKn; ;v4mR;Xu~^Ay[{Sqd鷘܍M J s6ЁB)",B"UBL׵#j/u֯@@@7H6w@i 
oJyCE5ϥdsԼt`eВcS+ɕsn\قki~)TslG>Ȇ@^wF@ 8h<:jgWp-|࿗ԖO;5S}rFHپ߅8g}|ԮBYAiT@8JԌY(?G=Q'\pȞxv愒`Wޥ'QLחz`+Unt٣f>DrhRht'ӑ7hk,N/(Pckofa{1dK I)XA9?gm?p}v7ߍ5 M9dOflDF7* g\S`]|?o>]YJTD"RIʅN E?~x~IG xC(q 4,;ZT4SLf>':'T R#3I=@oelй@Ⱦ^|S+|0L:JT8A@`o:b {>> VfOU+J,hW1#jaIr P7If"fq *X95eT 'OUʹS1\*ЙJR6ƥѵdK)k/*[&jhǙW枍'\qI=RwdkL -$Gv [*Kv0\ |β#w77ކAoAMR?94~%sjRi^TK Y|B6>h¡&(raYJ3(q\>ؿ#}Ln[[XuwOD༝ݡqz~>Nؿcp7_ʃ>%ך֧7nw+vưƣB-j^S(.e@kb8N\VZ4p e]ȑop!w[Cicެ- :59ם_k9eloި Go'nǛ |w7n}`f.B︛zf<^coz{6[og;;ptgC(2sӽr^.0z_ s~QC"f߸1 Ag4:JH7}?8{CEKJDo]nm}oUBienK5m: @e_yǰ[w4~'2ax+κ_ HD~]:uDQVcCMMw@\u.CqP\2׹,)V&(J)i30s40S'$*E}HA]UZ:4psJ'8 DU<К~t?L-(a !aq͓YFNA;MwkMAB4+$HªieN]k8kZ6Rh[l 4Jwr^}{ixE- ϨcAdbV[("QxC@Hч/Cpe1h`wO<[8[#4a9# 1`Q1nc`3^[v6 !iwi]|h(3q0jD(FZ2ȹs9,93ri&Ddc-a d5۶UbEV<wR}m YE _'$YbsN4]N%Ԥ=Hʑ D8q0teF{G*6&,O5#F8+dP֘1tج}ڨ  J{ZT!`s!،x*'sU2\>:=/]AGG);[DL|ӵoWx!e7B,Ӽ/vٗ]Pễ/Ywo{#'G~ppۋ>#L%'Oo){%_{`2-Ew`|uǫ4n8%H?܍x pmadO.Jo8;uq]CqP֪¬J4TYJ| OBVx“c6XOe( Fs@n9,P0y2D8*1'^, IƨQa)Q--gi854ٕ炦N\K<2)9ȼ0eya^,**<2mEHK.iG^M%QsuDpL^MyL 1g?_;W>[D4+98BжE˙<X"=3й<-:vƘ1o(k*%E'I1IVIeb 7I1Cb3uOŢXegּ=Fǎ̮' 8 I`| ~>qFa#kYYv rTp ~Je Vgp/XSrl EۆhyF|+R4~V?{*xP uoI0XJ74Hۻ-R{RW)d ]]Sow)*=ɸ ŌҬTg@Œ &t6f&[xicQqn^r䒐x]$Fw}ϼ3]U1Ͷ+o3A; 2kW6KAֺ{}J<2hPP?\S.ݿ{Y2|]V9tJV**7[E^Wjng5Vj.'cK^Kz!f=I`wh>ov ,ݞk(vI"MWq7> (ϋ~^.x0[V*CoCzƞ,EqNoCj[T˱Dc3TfjfdV~c[eC߂ӹWh-t5 vMU=Ҳ}nglsUT(aLXg邒u/Qk9tqs Gy 6e3ͻ'*킔]J|B ~iޅBc 9O󶳽|r3ݤ~K}(0&ٴ> {eU?F  ~6 ]}$[ТT0R_ŇRҺphɷ%K2d>!@T&,$HRӄ˜8ySlzN /1mkB9$د:ŧ-cVU׳٭K&w}ٗ8XDJul+O'y2ּB|m2bֹ0x!Z;n?g3iEBٻ涍dWX|Ȟ_XRQ"e'}T``sM AQ\g@ )PeDuOOLO8.(<)o-I~2sw4`|@o=u62 5QiV/)"3,Mr0616!`a=j*?zx7۩) D#١sI}06Ƥ=z-\J39[br[vWv5s [o]9/_//|20Mu;Ǖ9ۏ"'~<._>f ] ZAQI$ظZ(F"-(1ID:Ɖ)lbآB7DaN"0ֵU9e 5d984Sl8a8?Ԃ̏ђ'@hg˪d*T^u1iQ X$2V05 Ӕ(&$DFM$Q3X@ܿ=YSaJ.摪,߶N*ԉ*A+p^5ۯ)Q/(5DSҪ7:#W";a1c0nB@?āc&aE^ƀXi0*PC,* 8cM,E ب"h=<7>/bF'r)7` k ϒ=%aE5[6<%^PoaS<XvdBټtXBMdZeV; z=t|hà͖H F{'<]ܟ(e{l}XǎFH|3QH>jsnйO3U(m,M*4$*, H$!T%Xh*HB)F0Im: ňG {fc/%4 !|~ ?iq~hj 0mWW 1œ5K%A"VB#oO&uMW9_L?&.GwnAi΍߃S*٣ZyEjWk+D_W`LﬔYbsI׼c!Wxz]}Yc7J \t/A{ZX>^WXsVHXZQZQ0wWq!piubQ 
]\cFFM5KpaƤ]E`Eފ6đf{Ƥ s} ,K)^xaœedTLRÂ-*Z.W$0l FP;iMVږ*M!y弒5 2[%˴>3*z"[N{4&\1%c*NH]_Ⓡ\|R =OyM2qC(( 6J3dა#Nb(NaDJ2QKȱ8ȢHJ0eX K#(1DD&j8*TW.W153rVżfIQ͞*3XeTVũV 2 TH-~ǣx80$HnΘJd:3b"Ĕ  ٪J@ I~Ū>TQT/xrjX+5V!y٤*KзJЭtЭ *EQ$Uq Z0T hZC 3S\k7D7LRLr 4c:[\Ug !~8n֡%ӞqRA-x?N{`ϾM66^&wA.^ | f̆&pXoE$: D 0Ed`,~swзN|44^(ުN˕Ş-!ȍWֿ ;[6/]tqo-.|%9^TEA(ժ()L{C 1Q16P+.Hc0Jk&0a0jUBԨ=0[iК4m]˭iVη+Txw$&kUonǣ2J92Ȭխxͮس S9;N_ޝo.H#^ԛ.fx& WHo?o^AyTx^Mz=Pm8P S=)V'>\6L@onݜS%![jFϜxC)B[ZPv(%/d1wgSG G70e'7|q A|1Z}V!8p'g}tKnKWf܃QuOϝJI6lk0'SpӍٝL1V'm|s^kl+'߷f<eiΥZ!$A4oO+ c|>l/F囗xoKpھ !E%Ą\ d@׍$30O>yJkU~ƙ.XSl3fܸ&0J4ZGu޳@oo!ȍ b\79zgрw Ɛ CИEP%#8 84!GZVȝtnwWSY_lt0*Lϳٺs[azE [-xMwl/-0%Aeʷly]0H"0]qbL!E$B1aĜ&*fu|,dNf1Q,=0QMKC@0HfLF#814-1߈$&[% (X ALf'!=P]r'{`7 >KS5&XArQErJj{+:$GP,Q BUMBUp(![.@&j@, YP##̀n%Ȁ6b° MlNTcmf- ,)A ѹa8D)@Ze5 +^oAEF`;vyFnGZWwj-3 44NBa4 1=k y,?K$ᇏKu~r:̧%1YHwKQ7u>hЃ[β q% Q,b$ၶ N0Y\ܢ[~} # qb%q@1>QzVX/qbvr C>9hvǗ˨k!yeݡ +TV'>-NYV?#,~0b ozZ4%t=`?آEiCL8eAuW?9_ 7c8VfVSTӂ R< y20_Jrj~rk"xFL߯.~;jߎ@{!}zmtGE;覎|wMW޻W~ kot2ȍRm~۶YB&&wGW_8g4o=={8g;{Y QЩs%a/] ɛ'3j܌&Zݑ̰:ʔ_:rtПїޮwj}yf}؜ ?uOR8O@ޡ9SȬ1gjl/}htIz<%xivtW>&ܚ^tMd|2 ejߍ2Rm{;)wKau]}=A?z O\|y5~` nѓfҷd_`(e {_!w C߻N޽TY72{y :nd~Ƴۏ_<-\!\Zlt3};Gzb͛ߧ`Cܧt{8õn*ٶӹϾu/Uov{BnpL'R}[!%џg0E$d*6`r؋$mTӗ[U vɬ 6ֈ.l-ֈe3:wY3ĴQ+Jf޵=wf,x pƻ }Zi Kin3G300q$j1$IkT:޽!pOG & 8)ROzBQ1 cTXXPLCn+3cMbF5C+'r>aپ"h.,M ẗ́] vU@Z *ҮRr\֔+Ƿ镂|X8=rE%J-xADyCOtNwe=nIzl>Z+òG6nYo$dE2uCGE_ֻNJFwRáyLm%$\sů4E[Cµ AE{|moݰg]WZqe!B2 PA Ig u TV2~ۻOV_-:2vp(Ӈ7Љ)M VHR? W- e,% }*ahD; C/D"ћ; Lo֖z VWx.3pxgZ# rG D٨פ&ơbI sX wiJm BwĔZ.`W%$q sCp $i\]M0ɈL*N2;dךjUw uzqx'8ZgN&z5_-ߚ '#?' ѡ̚ =k'` ,A]5I׆Ԕ(ɴ~MWV()f-d{MP.CV)-Q2p˽ȶcY2RL6KgzY(f/::LG%qE7)D>I;D&v"Ќc ztu7!lx pJꁸ!NY%vjnLBi"3eɈ򹢂! cWVsCS9>L`Xsp`tr+dc퀵.R,'qwYlվs:f=sCi>3},V8Xl.GbzPc>obq,pF0$~:eqB3cR,lS W-?Ylژjiq awLЕ$ZrQ{r?08^L#HhrtY3M~w8D&BYdceLuX \RQƳף|fSX)CTr[ FA]ʻS - -c".2v}tZTI>ÔOaJa4]. 
[sSwS_=LmOdPg =dZFw)e쎻M`L!` L^}Wuu*`w/>BS՛ﴇ6k<ޔc7ף3A?G~_8\`Vbj??+::_:/ω(sIf nq9nBJ(Ŝ}D:' Ϡ#tܳ0_&\vvz%T߳a*#-%Tw4noעEROMy[we-{%1N:6Na#ϰQfLS‰۪罫k#I ٘: Ї"S"tw8eN)YA`n44]}8:>sԫS8n GNC$&8;\L)Ǿs)wc;*z=lgOP!jȮD=WD]aB0izrEϙ6@KBpO Áz, '0񉩰CCЅz$*gֻJأ#lO/][X Mq*rzf}Wy|o߀ l0q~=Xp8l^s7vV)zTO`׹D+Պ5?OFNgЕͦ3xwvk5%Qz[^5^C8p&H4m.ԙi PJz8,˾e.M٘߭"R:I;B)gj1n^K*%ן7H1E0 *R3h!\ r^<=N"H(CXnVksgV rZ62(28̨SB KJbr-lp0i>aXW!fX(*C#~K0Ҹ90ZuT6(kܻ&Ìc͗Dv1%+b,Ͻ ,'p2 H9㙵d  8ʛ>(żB#vkQpn`5*Fh'!2dbcPJqL6i":KO= "Fk0c&^kD)ιw3hS:sPξXbGNXB"Sî%s`ƒ ;Q⢯U7 Ae Tb"(h0&ALr0a{*̚,GCs.R0uatsw sIE&`:eL R d}{p?e0)`ܘwRok`qDm'N,[x mŎU'LOcdE{"L.%U*RMW7:0iȖ&xDoXk5UxY8B5!bsUFÍKE$c04F ODQ zpSXRba-s<a D ;rELˀIy ŮwXy?""ۙR3y׽ aipveVS0W֒,~uZ+PR+F 'pD:jW?~xfNGh* *)QS#uHT_ұTJ_r?nlN,Vyc#%ҁaJS e6#+*(>*JJzn +tyOeN0}"]i[M(Nx9} 1{ҡ]DtmSBBl&lC{3rXcRqjb=O&II0^YK9}K[fDܢiȸ!-2 \`@|tNʤ@ lZ*ó7S5Vz<#IjY79\dD-w0%C?X5A>nrxpoY*Quk 4چZĿ9GA0n2k˘ _;hGdݽʩ 9# ipJqg^)A9 hiE.=h;^}q޵(jPJ=WV5֍qkX[y<~N̢\Rq5^dgWN%'$.93c+AӺ1-YѨ :Pl**|ri(Si:1>L6O4%Փ\:zT^rAh>Ճw^| 1 ?﬐x xqN2%hBz.)uv?G١=k%d7{i@Yw#˚_WBm2\J5K p( lje _;Icy!gz8_!zrl#N6/}HBR&[=὇s30bKNWU]]U]]u&2*ou.Jeے)O'S0yJ!%(כ$(&SRE0]I2Ɯ^|] D:9 E%U2X&qA$9dctIgDh_I> [ɐB Q+4}(d (Da\͙Tv*+rlrL}kJ-"'_sGub*a+ v(7$ Ae਌&dI+kr5pXsTCW.(lofX6F Y+vÿ7>ܠFTp A& q5aC']clvw}fe[\.4\QX0y!8>'Cy`G8Oa,.ux&|!E|<5HĔbw 1s6>F&J9e0rFAu*zL\?6ûI KOfM]A U/'wC?6L!Ky\R&RÇ"k% 80"K(&BHIc`4^\ގ'5zگ?M 8LƩ$T(G'i9`1^ƃR@׼!f8G7E%Wg,Aېޘ辳Ŝug\;A_!- Zjea1Ben$uFdJG ך) LG(ELulh,4`o ,#w+o\9wv&Yһ1:6t5ݟ Y o!g-18tK_GpY'Dgý nƟŘ{FG+[~e;}{'Gccpʣ'zgЯ-:@oe罳y]LMe)-:P{Vm~wn&c;Qy:#禁Xy 8e5géStR4xcpz tcԵoݪgwj>] jUsf?&;q3:x6Ǡqqk5ikh蛅3Fo';O=znwvP*lb}ΤJ`*P uYB]z7Z!duTσհQ0:{$ x.h=Ed'.QN+:J)1KdP=Aal<군 Q,2ZQ (܃,nVFAa6c(Z 5!{Y\s.zMXH,Z%=O*K..ۿ}oE,>Nz\ҥ߿\5"߮I} ǧJb>XX`wS7K9mvh8e D}QH"Ubh7~|@N&{>CfT$教4bEpM~Ql/^XQ8&*_u"7(TeL)01ȌwͣTI&&@S3 z'|~` |B6_MVwAv*Sl}쮤Y3IKi-5w_Y#$$&w[bu'$Q K>>ٻkJЊ;EJʡ{GwSZHfozkB A;Y(4e#oW.ԢWo- Ь6fdSR\8T/u4-kNkG?'f1gVIy-$FD\$*#cUuҨ0'g۳_/#JɋOޣu@yPiU¸o0vE?fE? 
uK|gw:{vȓ,u•Mo?]>ʱ#|G3kmTA} uޮN`G2vH{;X HbU [lc` =^ Ti<#ӎgq^<g1ZuĀ;Nj\|\\vΎ^}nU3WR`" 'DWᜤu%;s&Y!YWN9jKܓetε,*1ao=fsUlI:SvtGnC܆6&9 >o%G՞Zr6$:r&QaEx˒}g30'3ګDz .HŚx"eUPX[h!#T I@=XsWϋ4Gm u텝Z)g'wV5S"M#{Oq4Q} j5t =3J1|TjP%Zo=bHxY%dBq"2dCM{O 9 `^=+QxA SaLDT9Ӧrt،gd 9P6| Xo.RGdrQ1~\x8 "OaSZAjS ڇ=TPѠi$P࿑Fi%0Q&&VBm+<=ަ#t 1J0[HYO̒t7Tߒ_=~Qy`R ]}o.@"Dh2Q:?bt΄wOwwч BwdPw8}4M%& N ۃa%"Zh0nw/}qKA-3jRh}L.H%w-T1E 6iKGu A) DÕ41 :Ƀ 'HMT(Z;N,_ӂhzPSk-"8YUah@ReAXa{1s JW,zQ' *`8J_bҲ`ҏLRvkLLJa=FҦd捽@α*]ܰ 0]97^*\&#xHu)=-V VU(5Xl4T_*f>Dٸ&ݾ fb0~X;n"7O$-|MJVZeL?/ӵ+x%sIoƚ英ğVH$U*͟0fx73,-๽d^A.9~9BKd~~>}M/6VO_opCaSpFJQY0.āVv;,[GA~Jy_~s=x;?zR۴AӁK'b@iH(Gi9`1T}88ƃȇa X2ۏLswJ E@h ۃ%Ӗ .=&Q`!ᛉ#̀/}Q3(Q7 7c 䇋RSղg( U(a;,PpTKA!3yόtI]>ɍ=(,F Vwei'@OFB7HhjDS4Bԯ~k% .^[<;W5ܰDĂ-de 㱫n%-x௡KN:LX-K\zspRX:L#pDcDš+ZiofX,_7a,50 ,5- 5)LP hD+2Q{cwB`yﬣJk8'9L9u ? UWV7>ܠ,5 h y.Qw_գ5oUL%e37/&c0wӏ< jB&DiVhWrn)R%߻W6ILW2tBQ) rE􏫑zd i4=%`vI?,հ|3' `,5.ʇ)ü`XFϊ<Q3ZR Մxbba7Nyu#a h&*#Ze΂r+vH0(*Qk#Z#Zʨ\) [iK?by\3'7m7߿J\}`OV ~n0u iʅ嚃qQঋ9?493䗀c| &6+WV n2 %iL͂lU35&  JVmܔA^Mx dv>yYM╦OU!Դkmf~nsoV=O=@&۫ #ޚqK,qԐsd!V1eDwiY.=B@mU~8/zx[ar]uS_ǫ{4ZJ282*(tV>;˞ŤDQLcPŘʥ3s:x +r\Z v0mW 03'KTqzEPjM#O B(`[@wKL(1icpu;=3ojh(JR:2 Z\S.)RNLA-ڙF'E%Z`l EnPSXwZM Z\## Eq<>x"A( Ü3bwTy@0eZ>uo0^f{ݑç幏m՗55&6=H7̇0/G8f)O [Vp/bҥ]YF+_.0k9.pēÝ=-N2~$u7["h8vܢSXWk W74̍<+ŵm])`T**E׌6:+zw޿gZ%O 0y4'&oq(r`mJ֞>'kw?g[Ϟ-!Ndhsnۼ$7Yc.P`zu -@tf]:Uzu m>UNf ӥm&9P$rEܥ2e$?}މ1#m YPN%4Š;Gw{}\7e~*ؼHhǼt}wݾ~薳|t!=T`ONP=rPSyP3Ϻ;͘(d8;QBKI $}C x8PE͠WȮw `^CN`hbJ7E(K N&E Եr}T^Aǘcp rǝ^dzm"뛷Q Ki"( Ac1EڇG}smV&RĺE),pBֈO?\³ \@U"?&ez;'-!HJovk{߄e>vB;5a{2HdVA(Q _?-X b7Fl[Hn&1TQD폤&YdqZ9}v5/?^2%ѱ d,5nfRilpQ/ 8\FI=Rva>^n?8Dz-\%| C/ i/c2_9}G2HGLT|UڡGܷvڐpƔqrAP2V[F jiAOsZvD*3jR 2N LBxP-#5Ǡ\5\3' dAi-N*ZЩBVX<;ս*m(RH‰ c8pSd`#Tny =V*3ku1<(pByypA YU-UǏ'*|U3rre،\۵g+<ֈ "jklq;3 +|ݛ_[?߬G.Ayv̴[,`-?¿)򀂺zIRa 'vu{&¡>jw ar0bMaNs83BZM0 "9u: q;|n3zxwvuҘ[p#2Wmu%A4yBRo.Y6f۬*Ք=uj@ x~v/a5r n5֟ס1i!t*q^ w/קj'q|q]av?Zs6bə!7o7o㚒vSy. 
CY^ϫQɼzUFСTYk 4,*1 qPWqfAՒ24_`V Vi/'hZEKYdw)*V SL;L ,6q}쐱ĺK ;w6NAnߝë󞋚!"]s:ÝZ#?M2N%0VWZe23`iMF2_F:l)J )9VʇfZ 1ԐRbm=;cϜ=։=~NN~w"h)sX(qUH{-W!0rQO:BZB 5)؏^((k +L%&\ :=T) .ZטYsš+T|T?~ۺӲ#ϸ!v2D?u2ϽJbyG4R`Q,>n; TiIIAcd"FzݹӪY G036yL -E3FŋB3Lk$(KA Y"4'8q01a!d Ph/NI(hVh{œ(F@x(A`?F@ n $ͰF”v-J0AV؜"˄T9„\h) VTC슙[.WK{Ǫ0bxa$ a o2aJR 29e4RrcA ƽkԹ'jϨlJt_Ɯwp7zNݿv<=k EYC}G/pXȇ>JZX>`? /} Agf>w¯Off ݧ Du V/ߞ~gLWrg7bĪVU"t=@"*+H B[ce a5;E7XT:h%of_!'x1@X5t<\ Q+\!r>4§8[!nau;C %, GyƱQPX:~01-jx^E9Jٲo_= aG@^po!zMu 0]ˍ>^Hbt0:%" xUt–g%.q/?/9>]ɯ.ξs\? \뼖XqԽ{qג6N,-LXֿs.Ϙb2mL$:frJ u)xwMp%CNE dQ>BNP#p8MG 3Em9W7ITcq._ R#dEb4$IK|4BfVC:DN8J0bxXɕLq[촤@C1) Jqg[d *Dp6oY@F+&Mm",*=Ep%=, Fs'+zYGBQuɏ#uQEt H*Z\n({&L3@F%qIqrQ*R=PGq(}leo-Q/w$&ɞrcbϐ#ͨIt/}_LZ1qtI>"n(EC1L$eom뱰A/B=w.z\a8ʇ{[u$;p_D0J]zf0#'S)L4ad3 gb0AYVGmLTB|(3B9tm{\Ӂ (⒍8(:pm3Q;׻KJ}8fG@(Gr#I[a697xnցr〮]35' X!Sp{/s$+buZ4Xx ==NeY@1@TA*E #fZqQ@TPL?r8Qt߅:?,_UQi)BE^I5JG_\Oᣰ7?>xbzi'XP %cJĪ>(9\)Q'qU!d ~]V79ZtB{R#膥FWU[KPk Hip0J{< -) Pșui:4@gXo?ޅ$x; IG&{ӛi񥛲IWe/c_egTbT2цP%eNe\*jH,ڕ$<;%2MaK{FCY*@cTNw A_B}c̴SF6ga38΁ 8 N*i.qT_'o&&7Ì]8/KEyY#A m,: o_v m%.`„գ Uo:TOQE9<"0l\昊@\aQ .'hVk ܥy?wmChQwOJbnpRL-7\Kefo$FluWj cOAdF̘D -7iA HID1]Z "xA8[qƑU'>rvZ, ܓv0GdN">َ*b|PxhRU*hi9 SX& EP򋞹`XG )V-zػm5 gy/@j2I60Jނ@WFK::iI&6M։v\ R?) īDn":h5"bxvuwk!RTR+{eyYuvyЇuoTO9a<(9a c6O1IDӲo"ɔwݨ|/t$w(N3Cw"Qk̂1? 
OFOGSB ΋EljYWgDpՕOAp40:m2Le1>ͬX6 hݓa`H2$Z*nFHt\\w<>NYiȐd,V{/-ISSgTDuhMj ;uLf<AeIG}`ƈC6OO!7*_Q @!jDDLF˄&r.`,];|--)՚VȎL`j2;<$Wt ;9 ]9xN2 "{̀}MbHIQf)%z-zjJӽWsxzFɞ$'ƌ:$8UH:'R{TA cp$Pm gfO٦d* @ tr\d_&YJiEjP6e!3Nk8 H{ԟOn>>Zs!᚝UFWY]eatUF=JOcDyѦ mbVROU dYQZG1L~ x% G5ƃ] U m=0ѐMgcױ6o`Z@Lt$k&.0`6b(NDq@H =#j`L FK7ڿ6"zG#Mg`|3Ьwj &.u?~we*«O#5/pOl73V!37v2r8N˟}@gWцe.OO>\  Oajץ~qEL+4^Afzb묆狯\\ 57 wXKNhhԉX:h#CĽs@"Q!R~_\_TD>5 #9tq$DKP!YVc#䒃H[ǘ wdg^)!\!r#qj4!:zuvkpmgkkbOB_jg#OO35=:o虃sg5x5aQeۧ#I@eOJg%k4I\ !8Aװ PE89pmJrqLJuԠINp7UJ.d@Hu?kVu-inuhJ+!5.B\JodU jܕ>.9E cӮ iDp4㑽mTw7CߒV T\^یQZmCp*XQM|)S"Z1ʠ)2g9fIכ1[m1cYqD>ͤi%c pGPAJMAJG!mWEM?0">/*E+\LG!Ige;Zs;!qmM>W+2;ڂ̴L>!t$~2B}n'ɳm:«}\fxbCC6HNEoN@3S_*1g}_yWJ%)p:hLCvA5՚=E4S)Ez OmP͆¨}YFJ¨Mr-?mZbGUR@8)0B9\$ |F}g*:SfʰZF\AYs0F,f)Hs>ܩ& UqqCϑ$q$]&pfc땊))dqhÇ8G#Nskh ,\:rRS$ .Ibc (C 5 JYDh Z(Ni NN'z4"H6/BK0%+IsKMs ET`ZIý!ViF$g+0%Ӣt`B2EAbN1R'f2yBq3K, T' >zɵ@PRL) .Bܴ6/XepMtQWYq,.* 5z*+dPP4HbJ DHFn9oOi%NE2!e'o!ڊLT![*CQ[ oRQG[W7G9x5c 0ʨ7>d`>@)oy" q-\.j8yϧjsnED=۠U9ʠΪ#hdZQ{M 2ع{a|"ON:>vUqR!frZqgˠ]nVSĚj/f^Bl/wXfmfTsy  (9,J 8hLE&h'BZiWf&kBZp'`J׶"Ji&ߥR `Zn`?IyYa[RH8>lxEN.hDBkcSgs Lwu*LZ@V,|xoS~!_$)cJ2W.U&Wq/n)H $ĸ,8%(:`x6YrmT^DR5Fzi!`;r[DžnAze}~ܶ6h7+7\d c YIfOdfOkg*N>zE1ahobT"RbQRQX"k G ٗobv6@skؤ4J^ܗ@ߚr4Y#Bya`ϑlVD3óI?e,o@'IH(" -A/Vl H"NjtU5s\iarnʊkTh7K!fF 8عuJ!ek4W\C&*%h15?S-QK Όȥj'(5ZqI7D`QiV04+wT֌++k Z@o/"jn@HtPO\-jiҏ®M>Eze=7jďwŇE>uyȘ%E/hV䀰-+=Dk6]kݡ]nP5c0q__ &o'Nb,3t֛" 4? 
az}㷷VGt4wE |Ɵ1-45}0a{s-[0OSYP@ֲq4Ѯ9wuaǯe;gj2^b*siл[{1G8Zv}GrtN>8`$EWk*"v,i TqqKZ&h市]S8,un Ah'Y dߎ0xsU·סߠwKpi Ύ >0v>Wvz+8ggTb0iq+&"ϹOoo ޜHVFH4"vyn.3;:kyrJ:1HJ㭀d3Q#BdB-_ ZI{f G{"'tpl)v1۵x9A4?$H=qFDPqߊYf  #i&ORkDV6HՐpDn͖_.ϚɕS2PY/WX~&2bS(V@T QRI7!3MITiqS˧M`|S: q5r!,n@ A{*cv$%?NDpk8dȹnt1&}+Ǧ8m;~[?e*rQ9]s4_aoexo :ف{7̽Y7;?9~8uR^hUz1O-5{>$t޵S# gٵl\[Shһӧ7xfv9((tA/  DamSlj,̣= ^\c޼kzwLJ=\?.(drvh^0*Yi|K6CX".ȧd0*`0 j"%\b&EHJ V`o=1w3/ȟw$Zx^k@fu#8 Fvt6`z C@a#AoNrލ_pwmq~ ˏf],5\]'| :V1Dsn0c>#QR䘳r%x-s鹍3Ňap&v4P׮Ӥ{;''BmB_.>H,-.{yzwsqKzջ5*ЋK!]C]::/zWcC{'J>&0-td]Y*F-#ڽ'ZÄЕr .y&(@tBHOϊË7l>TZzܑT.2&} GΙNZW!YUDe5J ¢F2bʞ30RW4g%c-GM?)YR"0̈́Aj&6J6m[o/kN&>n`5G(ѐ*3:e }0,uAm 9TVxUbb79T0W9iSB! > *k<}t愿vB3-S.̒XqfRT:qu:wNFj^ݑ}pjD$t~=|~ã{?@Þ9G8qJnf VN8!]-9ӴlYsӻkȟgQ&l6r +b}iUx-W\1[3 EZݭM$Pg i)zXo(bE;N)vKb,W y$TLJH ySq[oL@=)&^8Y 0TI2Y mD"L6! *32Kщ$E4yNq͘)3'cb=Be%v%NEG.Дx0I/ *!\_/Wp. zd"r>jO)a9O^BuU帘Yd HKNIO,0KU$Pgڣ|@n.d9# ɡL>T!r' ǙSR1T@GAGBs9y3JAFDU ) l'aǼ*Q:;L8"JM(KHYfH`ّ ښzi֢fÝNMp8eAYH*rO w"-f PQyK5Y= LL1Neg2S K+25m H1c52)ɆgZef)7VrcU)7VrcUXBq*(2e7Pxi ޒZ*zpy[v(\~r,r}KA|1xs{w|V س$r]J R"/UdiRZ95vym[ڼcc\lDb mAcf6oCMkq5q;暸} kB=sV&~AAny˴Q eȥ*Y4YYJRY ik hV*{d5}8XKa,I8?rDԕbH‥8/b=t@OSn~_x.%A>Lg$zw@TO@.~wy2}=8O6}~ʪ o럹v<i'~| h*a[ዛbRR-[~vҔzڔAžbq3\al`e3|ssW'>>ӊA 9D[GsPgȭimN,o B*Ss'N vR܌Rj`b:"gpF14"l@\qcnQR4GW\KnLD% ud<|p fMq qXj+;p jcs %7P`!Vo`--. Xn˾arN)m4}#hgҠ #18X7D"\joFR1ɕ3QN`Ԇ5 5S0_Z$=L|^e-ތ܏\к;A2Q[ @CN=¾J3X֭3>Wq2}D]KKg^T9$DRD_-\9P _mgl3kG=XiK擶|K>BoKnxSz!sVh9w|}_Rf׿۴ZgP hJ>B5;-ߚQzl՜nCic- ȍ.#p 0W獪V%}c$R`t E-L)\hlFh8oRC~A XnU/ÔrzI-^3!mQ`oFh3rn{B2Z. 
Yk7IS+JJ.\%K~C͕هC$m{gx/j/ AOHpu̝ &T)gAx:P^=ҤqreL.r!hIƢt,d]$,QB ,*rBNmg#;+5s>IN(y͂)\)ҒVˡHyPH-(4 MifTwKg5X ݥB-KoivB*%U'tpL RʘJI>+ lO`0n/9loh+mLoIÅҦh+dLɇP)m9)qĕџ#Mkxja9gJEE=Lf00sjϠfJaQ;>ksNur=tw_.z0)(Yc%ϕcW"HKf!nwW_=6#\=5O]=&f7\20`H8o^my-SyrN9]} =]B惋Y]nޒ'dρI`MvB?+ x$v:#F2ɓ**T\+m i1v3[iDO0ي;qo߼nކQ x G @ L]Ua9nѬpAQOiPMOF #GEzO}r۔z]^+'=)R"R3 1$!;2c%YEyR"WrA`p)f -l[vno:5dzЭe*i=`netolM>)QaD3'\f!0d>!c-h.J<0cqZYFdm0iW46މ>r)Acִ|1~q"SY G^<:N>я.=/y?>!aط;NSj[ Sie:i]d- *K6hb SqeY0raɓs-I1q<>L54v\]1:۫v%Y۞^ ΃.fzW6}mVXpKLёѬQ]!"˔ɜqe+3amEQҗf#~Y6{f# >=żl h'եŠiTyn<ڐc*)z'b~L׳ Sڿg8!e/\D]%bs1" grխKN^"h r{u[;vޙr%lgߢ %l/ _4K.dd\39n;w@ 9a66,w$# `$lWT :8EIe_:A@~HEN[1/`Z|_4{~;=hqobB(p6DFWk' J}`$pE]&d@ 9 &1$0ZٛHepY%`hz[͐3>aV}((UV%A'i2BE RIhkI:L7=Jl0N%3&sƃFפgrQ@$qi\9| ]Z2Y3ZaVǛ rer2zr:zq%,Hn)Jqvv\cu<.Kub#pЌ a<Z3Ǵ;gͩ2|UYj9#A3BiljDgyqàdR $r=]E|{Uv^igѐ8鑍 \?l^Igws]Y7+~T7CI311;kdϼء@(5-6C7A"ՇC,_& Y2T1/J=C/gn lN.>(( {; lNCLTЖehaluC4o@(i5o P[йR*Uuⶫ:wS3G*D:>˪Oތߎ dzd ?]{֦¼ \p{׽p3so?`O&qK8ȭsk=tq:Cd筳acLZLfi<0.lӭ3p?عB^JԚ۾P8vlԘS:xg \] ʱ}ߛ-DN#@h咡QN6< EGdp=\6:{>ڿGn IlPU!F*xasJ)0[H0cc} {ێ, VP6zus3l^SVbj CR&'H9ϕ)] ԤF`Ry BY!qϙd*`@&9Gqa H O1$P\fڀ'{_>5 {zA,e_@v%Eɾ ++FLkBsԶK]c1QF)vZpI߻ラ**DGt,{{0Os{hh8'p-玞XPqPO<ޫD:M C&;B z%la=ՄDJ'f=?n!ߍuޢZDR-F%t%{O5Ekɫ|5׈GurS"3eda>Fq?[#K38HMD{?+bvAm gqɆa1xg3O;?cf@pz8/8 t9?iL&û0h7&]Tv]36:8؅9u\0I3 N_&EK|g~#0V6C9#7?׉󠵫?/n"' )V?ú]r IB-vSvAb#:cTnev-yڭ y" S&&]nĈNU^}ڭ y"$S uޑ(]nĈNUでRݒ'ꐐg.g#=fýMPu!*ˑ4e2BB;a:,_żvcۀ>E+%Ga]yHK%/C7'_s?0F0kg͈OϮ~F(,xԚ(.\x.Qb$%`­l2EGLK=[Jwl;m1TvqU45-1x( +; KO}S(e1W+5&[?]D:i i?XG?yMٽ.,ةV{AGSW}$@-OP?GUS++hCD B!tOK>ݗ Br; 6UdC1O/{O_e RzP}ѻj)ltI'[,R"u[S T]-6~;Eh>;{֚י4[g~2LS@yz72İͫ~hke TPeF SŠ=cV=Ӫp#+걧zʮgv!Wm؎KiL)/Lş.gY k_f˿zܽɗ>rol{ᐳwՌofY&T)fD} hqNJK)P;s#EWkOYNu~l*tbۀ*SDŽ>$8u4|l9HI8u6h-QLl'hU۵SE-+v]WqHnx`5J-Cޮsf4 .^Q`R*![nR *J^IPQDT")s&Ȉ By̴FBdnhἒ#{#(F.OJ۲"u1O5#d-_JF喩gnfxƳ LgJ‹Z !<'@\;ф 0K+uTkK}$Cۤ:p\wfb"QǨT3bVW0BI{aZ5kg.I2XuvuAb#:cTngO[~=ꐐg.I2c]f/>_J(b{$g/Zk' \^eįE*"Lo\\Y|#ª٦ndXI|&Tip#|وp5x1_ۣ~#$#BZ3J0#yt^|2Ltf2)!2ߐ0bktǬB)A7s<͖ |I4|з ޜecI 6ێi1"L }o4=,僽]QW)W 
gǵ^>670y&pSHFxz|2Z?9s3tRʭ-r ;SX5 f'"`fL LqGFZ뻔^bp#T_ŗ߄#>!>#t*$DYK06}eM, yQ{{񗢫=@M* .7B{ł"H{/O1hwwV{)G@"ڦZbU$S!KGT9/ ˵n_%\O(nй Z{Q `{3cc}I9N\OF2Xȉ xatعxwtK]nnhNGJIb* *^z`H!%R-M~_Si}kL$" F!N̩c0a.5pQ2@gڰ֔ƊZl@»Vl/&/.7^k2Ό►>qPB3`_r{6'Ue+yjFt/_8쒺 J )NWm B W?W3L͉x(k$im]To$nضIV-kZR."IUס> S6S̺.(/d} H--rh.@! ^h*04$O`-זJ" . }`m\*]r4Sivg_~` S-xnIRvtA2 ti0_'̆ I vV$RBל:}JӀ뫏䞦m A(t1)0ǎC .'tQa6Kj4S0CW77; >&Kp}3M}ow0C>l?d$%7ܙɿ336>͋ /L\>A* zA4u˰af 3<ό.c- <SaGK>%E刁 ϐQ_3McG""dUa[Zb+o8i EsѢRRČS7NMy;389 9:ʈ1 kg7pdU'+|%]h P'0Ǘ-Ɋ HU5df{m acY@ÖPfVU(%qhɵX+4B V^">{áQ(#GuҀ砕SͬJ˥%ʃs8-{jppylZKEHu{PXY!\So6X˼gF|C|w8`MɚčR#"M V9)XV59rɷL_q,\w&f;yCɧV;p+ jF]U@}T xPG(eCI̤Hmq8iHsYOKL@fKr8j4ǩ[uOxn"olFZ|iE=|Hz*[Mf0ef,WAHU0Zȋy$QFiJ~FтoQ8Gk4n%h8%{ n-@^I^׸KS~K|tOF.qQŒ;Nͥ`B$Tq^(*STH!zb*%Kk%Tbݾ:fwݡe5K =hm`kX! KYh^xe!i}E{ Ta(K]¢D$6:/S?֒Ct 0<]鞛-V Χ~|k>~#>|~Uz`4BQCY˟7W?L3>__8!jLh͇pwh[j%e*{wׄH!L/]pvK&\?l+/ÕK Zq,RpA8g%D9 R8eCa$$U&>&̯P9(i`den4) _thUx{gSY;+>]Ss"(5c%+5 owѿikC  rM6[! ]CidlXmCޜ>O'&TۣMD4U CAsə6햊A褾v:xQ-~vkCBrY"^[` TAsc#Cŕ5Rm"6C\a7o߾Pͭ23,Sm/r>E%(mC'Nc)S ^\_2?u7?~A>.obטHb Hn  $G7/V0KAC7_+T|Jr|Q'Ŕ<VDo'_|Rl˒-ըD5 Gph8)CAGJq2+k'lxJV'E? 
ބh)>??}7Udihv2kG4RVVe[qȐ3՝nfe>F]lz#͘`6ȢZf,F]G F16[yzxqtpP^[q;6wn&^o 8* SDReN.7ȴd՝L1J ChN my^HWS[ ( \cRM!RXWjPVmA`n~z"(?ZjSm4qQA7g;8zF%(u pq[Yd~N\(7rmV[Uzn7<:;V5!8@Wck1I(IΘO^آ:,kZxVt%.'T+sŋ\ ENDI0)-#@w?+h x†b;t .Im@\mSJʄKx!^aؼ?gEMڣV ]'ޜ~Ju<*Ĕ0O}G*sOk}GVoCBrM)Cކ_nVc-I}GG y*Yo-d[vkCBr]$제څkmxZq`zVmwnojw.amht;dL ]f-&G|D¤uC؍ckB4P:THtŭy–>] N\Z6wGI;^ C>DR*tz_,n>G)od mopJ?~:%*O2"\prQ?qY<,@eXmU{oqwrcܱ k`t(Ƞ?2`Tdz#c:##{;R6$h<VQ9Bk\w,x_c`@RIe^Z@YU9/ق'9 '(DsaZ;T srtnPgTK'wxg2mȄ4Ӝ aJk5a/GKby@~z?O'hg!P>,OԐCgIϪs̀N毻 U/\37NoU7_suGH3Nk HgPg/v#mAOKڕğ"kb9Ta 5HX k薱>O ֔!>`ˀTZfj [W'scp -~"#%jA @hg{s[h/5T4+LxB\1srڛ[K ĩ&*q!5Qy¶&*Jhs0X!NdֲgO=JkPn_Z_׻OZdPf"Ë/>z39<T VJTYMp+u2_f*(G Bq>xF'Qy9웠Pc2##+ P sǬ so1,ks"V;3C$قY[nlz1wZJF-R%D{IJU0!%=+e:YUMHUi"u}6N ZG\M|p9U~mϭd?.;(Ig:9LMtNVX=IR_":%(!8Ax Ζ{R;JwIJm5TkNRPfGlv,⣤ R9"p-4ZRJQXU"°cԬJ\rZL,WZ27Ь"^2Y2*LBCkfC[5']峚.%zRX<3JLp#ƶl~l~o+ZPTY|kߜfZ%OjDf0r J3JjFmro=6T8(=H҆o(g":/^sQQ0qM*3MI-Ec\pA]]QiznQ(%织^q߱wo>?]0m]wa3njB > 5d:[@^4W_{tfv77x} pm?,o\3r l:G7@V*ˎ$;I.(9$Xѓ#" Ÿ0N`-v%̳ ,_bm &&hbBWѾ3pIf7Cڔ3%>JWvlN!l8R6i)U [oX-%4oL^Jx <;PѬ٘婸&lgO1?v^5z7/, F LgN60XMtGInU[#& TT#L$*pr6E Uos ɂ`p+Z6G|@AcW*<ԇƓt ct({V y%);#'ˆPl<ߢ1-z.ct( }`)m5So!} lͻ V${ e:|֏\NEaؤM)6SU'N,ॕTx=pt&|ijm Z5 &J1hCa!A<ډ& .$N-AٯUڴƤ"E[ Z3F)>j.Ì90QC e&,SA+~2$Y$TѪ Ґ`v %2է-|Vatk%68ǖ: Q-{'TP5Ojm'"?8zYqU=b {#iZI01$$0 H~_jP[zN.%'Xv7,QΪS"M3}R: Ø&-ѧD)4,Jh͎.{j)D@6S}rp<> @aDn,3 $ }â6n0YZw-;n9[yjʼnh Xu4R I2>(ݽ|L"TK2%ըJK фBN>4LT, {2vZf$(PRFy+[ې Gu$r k2}I-Ԗ;_x\>&7.tfCqq|KvU2U>t{ӇW^` 7w/Y>!Soa,R};Y1FҺ: Zgwxc|?jEU}@>M+e]D Hf<6b݋ςh } 0sl)U%m%zM}Gv=5ٯ۪譀`]9BN~kU8`֖B4L~ki!Mt8u&DzT3&oz#Dom38uNsr^0Gޛ[ MC[ pşD-9t'8 ]yIy$|z?<];^Y Fy,Qfdwr|n.goݏoE+{|V F RPÛBD* Hk2+5X? 
~:TZ(WH /T Y^@N { o>`tE3ɟa6gBx[gQ&ShȊ??N]o__G ˿=;`IdA)tX3=(Vcg2 isd=qPL4CtH_tב[ewRH1q316ҐYw&^j7fiͦ d)fi`ʞHj Iݹ nV86pc15BLb1r8ҵ>}KWq,z&Odv3^rﻅi\~Ƴ ~2xS4w\0cgha=]+ߞ umDwNV좽lu͞QSRj j(ElfuHݎFy)DZ0᝝^[f?hZt>ϢIR-th {GټcpV򺽃#BpHȓ>7dָD5+Zc B@7sc[F]@k癠L/S23ʃTT\J -UL 879cRZ(c*HNX(5A=`dgL&O:55#Y_GsSz㗼W@ P8|A1"KD` A˝YSoiQ ƃW ;\pV\Ⲍ.ǠB i1^Z5BVgǚm#t\ 6$5YKE4,؉זǟopJ=o7W.nMAoTFxV-}O?gY1Os9~J! d?߹. qlw%}vsO*쓝럒oKn Ta @77+>pPPy.*EQb1NޞqIvޘt.ZKN&9P..tnnW_lRRpNiASPqHrATzH@q8 vuLw5mET7"hx>US %YS`~߽헀d2ݠ9PGE`~tL $vb "O|h$a>*FycݙlΞAzɵ]nynsnBA<˄$ͳT<sҭb]e\KV¶ vGw& ct  HID3V7`0 wԇ(>drw'[=Ӌ4 t͓i8j/hPmQ؄jJf,5V- r>%LǐtDVˬPVǨ50#6ӥܫ̞1VϞm 6hx>+.HP#dQSe,<R}%6>HGUhX]Sɘ$"nM5g(ݗH+ BILp Z`4:)d{rAzݘ4 % h6Aeqi+ ]_}T 2zGE F쳫˙&:}u%b15i;q8?_S*U۽{k  gf o7w5LJXy騍RIAkשR_֚30LjM{ۜ}譺VTu"ZP>0 I,ӫ UVF zKczk0 #>lY1FAbWԇ^gG3tjibAL-Dyđ`# f (X,bUϚV 򵼬As,ӽ'7"f޶[P~<EPr!hmz{eYQRT.{-@p)m 5&j[0w%7Jqè}0#%-FBsC IUt:;BqގG_xzgחp^͇~}I ^hx]{rv\az7> f>.ޔؑ>ybs "ӿ\cgzMy > 7h3Rjң@=Zz2wPμJO5WN3X"UZ 2Gb1 ,t8. ͜baʭ<fB:W fs(͕2 v@x <ɹ4"gNhDlށԖ|KNju5 gsV{9g^Xt 81^ҁX#9)4.vrNG$gi7'6 _*Yc,(/ N$4+ު@BC?{Wȍ/=l{N/"6m>_ǚزG=]_QK[,J-"!@<[dɪ"8DL"1tF71]+J;۹[QZ3;\ܟ J$n*6J2_2o6п?Y`q?,tr,<~\LcBi/~~8@yPy} ^WSˬ5ZsKƣ $hp-EwT}>ΆlaDZBR4ڣЅ Rڈ,wE qJ"< \#(AyR ea+.I6ƃay0ZNPoc~!k)i$A|c'Jw9%7%`6bL/z \V9m*4w$4D2L@D$ qnt3O~2#NJi~,y)_:j1D3Sic%F[$JC eI-S6'ΕaʅUdvQr@ 2V8Zѣ*B”;44%7Y<@EKyM՛}J-ׁ ي>3b'ڕ`.lj9Hc_Ͼ7ay%h 8rPn'T(B) 8GVq EeU^j:UWWe|]SEGKiT2iXX[u rHe\{5cZP3ViD4+KhYfUϫiVݨ8&?9!B[]~^1FJwJYǞj&ؙ>fAĤOpdyZt~웲o,:#(Ѳ>ŒItRԂ, !i˜+Rd ymP^|s$ rp"J*G5p*dRNՖk澒N}3CYe9_| Uϒ>"}65{]A:Y&SK)R D%c^I\cSc匴o!#v6\V(bH=:][1c7m}d|WkIғw'FX(ͭjrȊ^hl*6wpD=*{#T|C_T&F2zo.u|Rwښ3Yw wd_2g=f:MIAY)txS S^Nߟd o-" `o2OJpB{R{;w\-CHvJ$# *9R UgWLngC,'??2N) gY`h8߽\g?: "AІr*0mp ycdXnV@ Rf!j(Kڇ}" !yŸ 4텚)M{?O9L NrR$Am^<ӸvH~q$=MZ_4֪"WԺ_.{4Dhn^'տLӧ V_JH%D+8~Nߪ<>x|*!$E7OUXv4Vd 4UmM+wq|/|03 kD32]:`Nqet+^ Ca8~W2Tvɒ}ˢѷl3h=zhAdzGӖ~za^IŠѯ^MkJ^G!=j:5kηcl/#n.:i% /ϐcj ~~B{5ʯ&W2ҋdF::.Adu)0EG6g+❂v 2i @vgShšβPgåιuj#nxgn4,KygjئLz`I-ѐLx`K!P>3M6xhr.ҝ?e#Lmתnl."pnX#9] $"3ܥhUJIOXQ8i/ EB)VY回/ Z]?@<0_ V x=hWoyWvd?gS^<}?j>Gow,םǏiL3 
g\wpFGtN[-w]2 ʁhT{-'@)ٍ&B6h>g؟rmABʿo,{ @bzrLOYEuu04|s56jD2.?]5j#5WcJ"B:'ΣḍQjlYATR(%H7.RkrhZQ,qӎq*~8A֝Sp.Tdj{Tbɠ]鴩*JCJx$8BIP¬L&H`A IQ^M;jM;xs;0ce 6aLui0 *=Ry.yKaKa>a JHPGd{i0*\$ҏHIYrhN tj,!\vLvcv=1 k5#+|Cng7P_=}‡ߵUNixzǶB; OiXɛ71yR3//0ɺw'$1e=?An!%ZY/Sy/zVLĊYsH{|l?h{E H8)0-4Nh.[t VUd g?tSh  Z3YKmǤ@nH.QE+cqH`TKVZFK(=ъ@!PGgB ڙ|;Ng.# 9pWD=iT-&ŽlSCY6yq&} irEJk p%= /d6 tk?tS zFKB7,ڤ3 'I,WUN0 TSAK2DΛ~տC.3ϭ3Fi^1q#-DCّbHa$>U!l+{UZ[df$ ʖ$IuB$d3WiIv@ br2\hQY[.*@@rrIv$;(5.7KTKԌi0Y%YCGWB=ѕPQ#A(ߥ" -kQF uM,= ]¨<-9/shQg2"$I&lE,k~n6:e,-۔E:l:ɂl pA Y4.X%5 oIS F75ߢ G5=?, >x2 pw. | \n0r^ywիfTz3-ۺ @wksqW ϴ:temmGY ?4DHM EGEN6FCfj J"x&o~^2!GpJ5>=(xEKÕʄl4"" ytK;[-W`ͣ1# t 8ovp$$9&N?,UpX'!wub q,Enȣ!NqM&.(ң>Y!tePޥƑ']D2Ѵat-`*'0 }o/hW᷷Sq.]}g H+?N|'gD+ )F&c{* oNHMlxs?U}Ǟ JSLOߞ\W} !'drZIyi4$0!;-戫JLuހLlx>iQ<?J3dd{}TKjP9[by`AҕD$tDۚ:Ѳ`5wfhԥTpZ wHTbhk:A;BXb-2 (Je#@:9$EjIމ,9N(8nԑevSJ^Wr𺒃sjd5kmEfu%zTJ]o&%Hi9Z9yqj߷gH3> RI2`%݄ jds`xhUdonF?Erߋ^<8)z鬱 WsjWBUWWwL^ih\{SYV4,=ygdowEw4; 7o{U^gKTEϔg'CewImP&erIo/N7?wwwrfwll!lYލҋ#S`7JM,酦lɒNa Kz}OG`I]|{X!ndŞH_e]zxKX!!wSXÒ^ޡV9.[mmbGWvaj]UYy@l>^_ï[w :J,沆IB|zEC,_ ei\]av>?ƢxE45spdLˋ>8?2奬8s~3 ]׷9#ZÙ'̦}f/*Fd,uP ٪?#ÞD\l;N[g' *QHa<"4ܱoK_>f&rryDɫ,Z 6QR~ JݨW[ oMMۜ(9㌿繓#YZ u:XD!.|plH9qd~3EoN:A $~0RKju,ha9r/]x=\:0gC)&`%[#'XT:`I]b_v@H!:ɧ2Md\%3m&g!p͂(q#$2j&o{,ggQ=S3RR2Dő)L%ژs]3DEL`m}k6[}yL)!@)jNHD#L2jeY!E*?rĚ޸G{|ZU}K˷lV׵b;rǑ"ULgFاZ0R<+~Ԉz| VOwjNB71i{zQ3R n`}rְ~F tŋۢZ܃Е|W*0bC`KZ]f8B]iJ/2#_*z3ӳi_(կfEv F ;\G \H0mί;K5~AL؊ŸmA'@o[O#:lp`b޲4sZ8ЁuhS̜x( ,}AJ,8n?# SE&-ebVF`IlDR$hy'nO59GagllͮҪ{[ۭTjS6u֭awVafYw/Kv]Qd7\ypQ.QM/W(TG*J^`(nΟ;3ד1VEe8[6J]\g_!n7XWf=; /׷%!修*$@?Qu'WӒ{o'Ƿ=$~9eB:~>M&r_:c=R6vsyM1!D~_ T]X9 9|""Sw]5ȎBbP#:J Z5OV!!_6ɔGı \|@֊A贾cv;ܱUi㮵[Dk2$z7 p&$4rAog6@5qѯk3J{w)fL{=_Neʳ.(O(Qa]ܡA-S˜QHϦ&Q?*u4,9NE]Sp%c0)Gl)xr] ${}7:+]'+wr~UOwOzn TZ&S5Mқl1&w/˵w4I^P I6.S47HOYe ,d|d2 I('" $s(de&17$ZZZOr `} ?WpalZlzx.Pr;\FepdҧFyҒbCRR,K@;T iesrQHaElDž٩vn]Cȷ`F'j {|5TjЖL 8^ ;gkMuQ_\=~8eE\?Y9NÜ2|p͐Z ݝ4n4V1pU=q &}۫{gz2 hQ`s\B9%Kw)Jl)#uQjZ$MqwLFIJj̹]P*XA)R1fL 2f-ַ~.їlY="fsbf|o-ߊ;ӄ^s>X zT rς~J~Usq)v*輗tQ>;_ 
X11b~bT-gܚcLAIG40b1]Fl}N50jf$/me#Q{Xf XxS[GsۣUϟ;3דpT?-\Ϛ.] '(uwV#&Z+ԩh'S*DhɩKdk$e.Jz2&y.6mB~M]WNURz`3돸9xĽ{*HPYω5 xنSgI2;SqiS}4ΥP)SKI8h5g Z:h $ǞٙUT搱ۺ%;D2@ (p9iþȞ{h=6orݎ, bCmwpw\sΕ]sK0ah|dҨ(.udg&!,j<=;)c^P(@M/lKڍ($D(y!d~I3.S<Ɂ9 RZP)vBgx$tr" /kt-niDR]\nkF^k<|'=RIhk~2N Qu)Dv'`!,ƅcV)`npe"%U4uɀa,&Kj\YSDEє`8ǢgJdrHwͳ;8B mh1^o}=3줹OSdƳes^s\ۦ'}S&N4>K%=<zs8"ӑrC'w[ܘ,ewe>(b0%ʛ :c'l) 8 Hy^k)$'D#xw3?LSF8_u)3{\B P4LJE"c|]LV]O>LaoUf?C4C&c6X sx)#TM^%N^M1" !)is)yv/M;zNJN|C"2Xf4w3Ki685Ԛ(q6Fi"aЉ]:^ZϮpimG_9ŴxYO<옖km$+o zqn2[i9gJM{^nh!6 őh<J9C($s#$ ʯq+@tf[v9g!CThn(\ȮXx6O]]5f(ּV|#mՌfTfv:;T+4t;Ղv0ۉLjuvwߝmw6鬸NxOg>t,. ,ǿf!L0Zj7d=#^? VH]|x?ğ:cs$skVta> bT;Aj46rĨ,R3Ah $f .z/bx5Ǥ'[z #:P BixKS1Pu$HX$֛+!}!)"6c-TI'gMH`'jEsB{H:׫:Fp7$@-?Klڅ?W-҉:u# YFטgylr< Z۔2yr tYJJngٮ:0mjCuj"HS5a-@,h.< !)FZ'^#-F΂о ^[3*JW5T_PKr}߄Tːx U+]Y!,I8.ꨘ|lZiR?ֻ sB8L0 z964i ){%g-30ɯgm ˷!z(ɛw?|X>\?~B{U WRU,j&SNhJfjvpJ̈́3 Pt)ŕ,ZZE𘚉DS#"ҩi+MeXVeKuJ5ʻɛx7c/-Ϳ.B[o gg{.K!_ml7L/MhE,Qt0?NjNPա5$u*X37SG?/y~{z<:Ǿ S,ʰfl 3m0 }=33"3pPt4Clg$KIzHS;PB 'IrڛJaYk*K_VXշǒ_V40|a|c,7HRѓ^֖D&`{E:aIq\p?eIf=?ލ&_l@N Jw"݈%B2J!ğ)4XHQzmh֥^5h|4^-/ F;Q#<=SJ9~I)?aJy c0ܫIŭiAˆ਎M0E~8ਞZ]wD7m"2^?(`,ϓ>7072ĒWhREWtq5ŝRM?ĉb!Ɖb'Gmq{c%H sEꅴyLusWY'+/GT'y/z5Y#D3Ķc`ok$F_$Qɔr饛d2o!L&Rʞrp!?"18:-W5ga7B4g^R1d?sYkm+nQR9>e8AÉj!^'xZ)Ւf(ṳ#A(Y iURaLTwắM5)~!ݸ/sVY13-'<ᗅۇOSI r0*~Q&゚]Gü{}F٦rO߀Z;_]%r{BOO_3ɀ h<\k+4AS\Y&k% `~{;9{xBn&xQ5H-Zu|3!iKsN8'dcГ5 QϮ^=eͦs P(r{Vx͌|'Y|cjxa2#YȰc°@Un*W#o>-sb U) ,8=x(|,9aIn 3{-PcFChʫRÓul[2,9eS"ΒH^At67WX~kt'ҧ q r+8:펳`χWy[K)-.gapoh%պˡNܩ[SJAt{!Z2gDs g=\: j^эS@߸vy}cE %wL=iJ(- ovľLMa_/Kͥ}c&g$rFM&Bl>6Ի67C&[mI7Nl-@nįૼؗ_8^j o7mA ݜ}βyVu~_UH,UMfT ߼H:u T.} ~5TsF,%bM'Kiժ[~lȗ2@l&񖜖d2WQ1|o.",31S(wV1c_mŻi^13˵aںa9>7b:ӈ"aȘ -p7s7WO{%5?.Kz8抌o֒4~5 M*|dĭ'BY(FzzpY'F)z l40½7ܡ4hk<_q %D8\LS:8|4ZOij)|OF&8NPFWFUcGa_8gH υ@XpƘ3"RZpR*Y Sawi j vM)\ۄh fHQp}&5JaL7)FTr)X7R!w#SJ \ i#{*f N.M|&^$aZ] FQDʛ v~wY|I>&f~~5W1i+z~YvbPYWi'ͫOa{IH$lW }V { }Xry?]U֑(~j:r~?^IRy`tvQ-' CO`Hػ&7nkWX󒺉¾LE%ɷ\brlFf1JtsIa78KckV#BU㹍 l{]#ݩRKPe'J)k]2ᔵpnQAGH֪rך;Oq&8`bRCsqB k% ƝrLZofJ{V]_պBc VLV ]՚pM_dIB"e5EJUES`L/_ 
8^n$ތm>0zff_30PVm5T6'Kw˳{{~ zi:ގNǞ&F-ZʯZL)WcJWoaB!&PlHZEY-|A`tJcK:X|ԚvQ~tvSj1`;dN5򷰂/ƒْy|%؎QUk<[ǓE.PζpaɄl'" :]{&'fu:ܚ2\^o!yd4 ,S1 ~B: f7HSS:^;,CPz/Z7ҟ?MvM'[c-zV1J 2rw#xo;YW;®__V\©jbҏx/ͦi'8>$fhzzt:Ń)~?r"9au_!0ONgMu뢁EE]x8{Wrh:&1zU↣uo8Z(Y$7OեQIJɡ87ƷNl'Cٯv_7V2K-$$(iEEP!,Q{$Cc7DBz[OpBdwW B}OɢO%RT-gv'4,_!_(-iΧ MR1aJf=dCe={Jq ~-ԇtm@)q> R: ߓɇ3 wq3|(%l|Ҫ~z)#M1;^Hr<=hM;jwQqtxEh~4Q\`jW]w˿9/HWr\yZB@.6BXH~:^KLKҘpxN]VNF9a>]}$L/#]<qYk:{Rtr*:mXpM i$-6:Md,Qq>@H~ HLEHaQt;y i8Zhd,KRH }:FR2ƥ 46Nؠ ƠqiXS e^WL50^󽣷&5"`aP-] Q::!\b\YI% 'u-JJ^sKX hkje v`]k, E4Sg{U)JT" &i3f]l 8"I\:&r־A5LUH)(<r1dQy0Ca>!Hŷ{<ߣT6%Xf n㻞[W__sfox_PvEGF8 {Ӌ+ۓla3Ol3>TQB9˧{V}"wg`H"P>ݧX2%X F.?mg8B}"]}^J(cP[}"2]-ulJS`jKp14xs SabI9CJ,:P]7C ʠSYA Ul窗ʫt4}~~b]Fi rWmjm/|w M94>}ͪg&DM`Ml]ݒrI$Ԕg+JvXhaxE32l3;tmHoQ;r`<@hX1/gm/5B*88,ln٥BBP3 BMMYWLO^ =EvCޛ;{3}6{DddyvpQbMh%בeH5WiÉNgY'G;T4g׋Y{ #zk=G=<-1V$BPõG®P yî#%&4)N/t)$|oR Ka_ ]*8TDҹ";n2fp2/(k941_Hpb `k'#8iRʅŧ ĸpɧ[2 Z;8ij 4vf+OP)n;f;C _.b²1A_C/xie|{a hv7?n\2>a!dT(gw很nêSvAn4=IL<@?zK b+'Ai2.>y>-}%zM6#vQ}Os)NFSie8*5a7v2qFBKpHR=Cɻ<{QUY9SF2j[V%XHu]U]Ukހ/q*&Qb^1eR)~W@⎰ NO[^Lr֏wZ/vSе_CvR*a&T*^1X GDmm/YHHF)<7W|>+ÑCbEk)$%Dw?8MƟW̖<{8ػuχ3>T=L g>SΏ0p7ZBʣWEE~Kn|y>q;dO"3![x/$ywA>']m|v"K_zu^y?VꎑS't_4x&O %1W|9w/'^~Yx 8`WuW2 ̓3Z!_hQm\$GbtQFu7yVMVQEvkb|K]EmWp0K(3In0!WĠ_3Eeg+UToLR _悛^y{bގfǠ~ #:FjB#,̠7ɌmjJ-(VQg |<\l 7&|*RqS9*S8%6H.\zd|ĺ;,T*Ի >:j澒K~R珸z滓q㋢%WϮfFs1E1Y)uŻ .Fk qpܬ/F'C~oNLwȷyABsUD१Ji,23={+vf|sAKi bj&3#h1ck6RO*$1eKB9p:KCYãسc/UthZN6:tbK}]59pw*Ė "8 x},Eõ3y=oe]Yvh;.k/tX$+T[FiCp9p9̀J(_GŲп/f`ښѾh9GjgJ2UR_JH~/$~~.+n`d)Y><- +u0}V~N,NoAFb cp3)θX 1(l4(Pql2~NH$Uatٜdl>\ΉA6]·tasS->J8yMNbTY _ ]LS'O>Pf2c< H;r͖% W^:Eo/]T rh򜌦sT6񒟝 K$%њaHT[ajkF}tD0p=V|PAHZ ^BlUSpV%J{;!)(PZ2uQ1OcDhe!$48N4B4۹Ō G{ avXmE//(I!T2'f^,D86iScJ$ wΗr{ÑOӥpMLUZXN26\'$ HF,/ kntĨE jHԢH+*%6r{%X~W6*hC0&6U 2w &DZrl*8#IB_w J逅l.AЯYTHʉo~CJCJpz&ŸjDiE --I b,ح6<{B9yEg, 1r C3UFBKt*CGvql'U(XD":CR`EXȪ'*ɝ_Vir y8܆[[d;68\88 O]r,w(ZvuSuo߬ $XJѲz-9|JPBcKU^i/EZ2Ecv1 z[ᗛoY oyƠ!WQ;ꘅˣCTH!UuZeHKz qb+QM݃oYW}H`y 
j!L !%%\adl'+E(g5pWuq|r9O߭3&uh,mb\ʉ&R \nV&~:5INb(Ey>=`kNEJj}):0wOy(^ϔ7&=hK6vѻ@){Kpg͂ʌ2OAٻKa [g~Vws8Vhw& ]?-7ł&#ɭVr8?O t?֊9JB{8q!@;Xv_\׸޻_rk,wۼĮ.fqTwq*-K.ݜd_V'gHkp5r#2`9B+,،6+QQd2UNQHټ:^,B8zt@<8nJE*p`/`@/pߎq-@ʊWgP7O>ߪn kzm=k9AWW]BbjfWs?ҘK՛?W`^Cpg0eǠي&LZe987+Cά&SX *2f 0fgd1%P"0$WH%PV 0p!Mvfm ^eߦm,1|٦)Ă S{bkAdwc cݙt1p*4|$j]{ M,bKjacO+u}lB TW&)}[cFQss}3?d|c,WO\d,n2Θ/5>G|4nW#30 d@ȨQb*r0x60qa zP(5k>2z zw?Gu?ms4-{=c _J Sn~qpOR44؊(F]F  !IW&V_Q>18|Oe߸}df"_ o<1+VE]5~\R鋁 T4sO *_nB/1T %KM &\VnXyAE%q^*7R J3ZѫYﰴDPo&CWՠvө,'[}l!H?uuy=x2˩>kjplq<}eQJ+ǝ <={N+/E%߸]}:O# a F- (+BpskT 1M/h@ּ//ژ&F*\,Ÿq3M 'A\H :q|4 @_C-)O=&=\9;p0:"%bH\`.Sދ@(ᗈkD$y%R9(Zs`z"l(9FlD@ ȝ;' w@+ZXBRs#mY$9y. 6JQ?4ʉUq-P؏_!B7pN}춒{z1d6(E/*]W aaynm!un /bQi*%Uˊ+jP+K9;1Sz~ s$]EU"F y멨t‘R`Cg0IZWc[1:8`}Lh,8BPԍR࿊!L!o + VҒ3rVӰX#8#^m$~x_$u_D6'TWX>,C=Y`'0.))EYb% 4YFo+7Id!49/)I?z+^5Y?9/iõ}u guZcCy9-ќ4$ApC`` VIZ=(eD8ݘN08yr~QP\jc91eFΩ5"Zj1)5R |sT- Χ3)J1SnİKŕ-**0 G d(xEnWS4s,,/OU2SK[j5N۬"Ѭ0n׺'zi>Z?٧8%IOA{4X,7| 4Uyhy0lJ.??2|9sy˧X6LSahate5R\pn,ײT8ghURr쇢˱ߑ(x(,[(MO6Dgx\lx?$.~'|qE)4Tl"7ǹŠpsìrG.H [VQ& ᣂ/ʞ8/ \Sy(y<0yV풧ծT$RD!xҧkU)n|>5Q I#A "ޯx>=` Onfラj Rww(~n;,qflq[ɚj2M4 JԒW8Ȼ8NX%_nc#HeyUrtlo?O^|V\N|3+ р8_<|fU1&_X,!dHƘ噦%*l13PU|FP?\?lj7v<y;EMeTnunƘilۢв\q@e嘔V"A5%֮@R"Jȼ Ω. nK\<4*8(+#pwL`VT+`Q[#+!4ؒSj 8pâ\SZUWXZV^7*c`"ViGsTB IU*jݽŔjxj|{1:8AW뎾q1oѴ^a80㞛ňVg!bѷ ۭQr)5͕So F^zLw rJYDMsdok-OFHOUwe|\#>\X?I_9Eá5I-_hogA۽epۑwL,6C3ʄ& 1Q}yP'g`B1a|?c   'Gnq‚V4-iѸ4-c}+NqaLXitۉjI9WxYO,WNxN#+4UMC)pRAF2^hNKN@N 9l]z^]w= JEvhc^lS͈Y}z]5!xr|)qefV'Ӗ֝y 5Fɹ//:0@? ?ӽcTfЍڔȋD2H'qǞ'!0s~O? Y9!sBp ? 37N2U*b2'Ԓ"PUw͍H2Ѐ.Vm]JvlH3V$y&kPMђ JrT%3M5n4Ý%Z ѷatNatx[>(V4fYx&bJ7O]MH CN&2KouSu_z|\W? Ô[U=L\\J7ugFG}V,#!S! 
)֪ěS8w"7 4qA&X ǵ=Q86Ш`LXd_}^[eq jcfS-]nM FUwUOC*6ҮiY+OˎӲuwQ^ͨm#8tg:MҐHYEj̀HWbtƿwC"R]'m$5hXr3<o-KFŠ`u=80q35RݩZ?j~:tGyfm%Yo겂归65Lol^#C1OV:3HQ*2~:㡸ր!Ʌ O*p͜K|>+1_S/GYZ@(ꠉM?W&lgo 4l geϪ-(E1 ĵi `3i5ֆޤO3C!_F"2QCQ#ܐQV{>$ U/16U;[*%94zΈT[cy-/l*͝9  F@/xm'{-:A)BW-O5a `Zs3ЌI^, tZc^30Ѕzbd2C)MUoESK͹P SOY_a47uW9LZ\sio-qky1*FI'6kS ɫFa>MZ7$ױa:$:m@F֨|c%;^ ȝ g!+yؚ{M]_ڡn>v͍dU7IOkC@F6#6짬'c*rhm>q_|s3ы)>5 L`V>jֵk?g, }ͬ*5#?[{J/bQyM^rעzH=e=~ Z#GLĝE7?M#0n@#}m]'6jq զY-}jO_Rc_Mk-4{oY{ObԻDҪ}dZ 0D5֭QfY@Wh2;j4(81H;}t\mt\3P\!J>}W4PW&LnF]8'k iRKycYskY;;W{դq*@PÑ@\rZ0G))cQ tPr9Smbq}!^z_w _:}z]g Bm@- -$i!^e#ouZ vº'`.12^/h.#zIUEYO;7N6Ua'?Ȼbp ޭ+1;xuѼ[bHևMM8y7x ޭ+1;x"$8w߹)>U_~Ns֧z瓻CpnypF tqy;.7_}N/7߽5xqŢsS.Q)-lX.qEB߻a$#g}7ui|֪xʱW[=1k 6 vʣQCR,cJi>CP*yvRIgD9:3x4>rA0w/nۄͿ&_,Ě߇Nut}zp.e'}Eҩ[tNU *tY[!⢍?tïIe4ˋv]>s?ب.A+8?)_BD,MlEʠHI|CkhVdcn'u.HXVqXڱZ8?gݴM8`xqy "ޮɨ5)LG]\k)f lfSV#H9s>."| pdz9F} x-3BgYh1gY;HuNc=sVd,PڧU"i 4MiL3u; ʩT[CCQH#F;6_zC.8^P{[OoM\$ؖ 5Ҩ%̈޻-F,M h*5eJh-Š{D`zNur?eYA|.϶ k{p#j9Qi >4S&&Hȥr8 {<2z灌 ə.!ȖK{fN\ȣpNuy(3Ȝȡ4idf\Sn%xKH *9`i4mJ}}Ofmhn \A퇄p_"DЏ)+8*ٽU>b{rYZF[Y\;g 2HMfH TfP3p ݌6e[ϓ/o>*3cfk;/­qlQiuq qk 뗻O, %9:|N(S VnvCj.?ۢ(Y(&#[>]`e^6`/Wb.`|5}GI9ɅpH4%˭4@V b|ثQ]mC=7(m(ɭSSbY&HJ}ΐ)Ny]ͤ v7uyv'[UVܧ_Ei2kuS˯ºD΂"-#i®̔B㩁Pz%2TW/ʷIgs_CUe1jb$M]jD~NyB$GjG`/K}S,JOJRJ_jQXKRԥFssV :JA+$M]j ^̳Jg+UmizҗK`VzV*#+mK}#}dav鈌Hd}v6_߹.#Ts!Pv= m/~Мõ6Vl]Jsf)xQ-\yM`5p"4[ 'rLb ԉuD.#yQ${ k>ˑo:j#$oϺiŘcCx `t&^`.x=ocl2U<7BcG/'0F Sx6&:B {0{!`kT# X%Hx̒|V/Ƽa' . 
DL%[:m 3GM\(3g.atdGETg6FnНcʫ Fyz߯wm*qo^u-q 7/)p5WІ1?ί/hAR&WQF#ݗl~'sfoO>^ )ċm\u_xNQL0M@}rۋ?}wziR`O'Cb%~xwԐ{!7I#3 < pۆ34Tm^vU>oLi1DHr~0֓-̕<B鷯&O Qd8 '|} _6Ԟ\9+V"C9K 3SiEdJVbFV 6/L*xa8)#3'̫T#dj2TH2/SX:ҝ r`/ĸg`QJjZX=M+=0˧١ɦ$M"5#BNRʎs^+{o0uUPߙUxU 6/,3gZ <@1BpwJX7~Њ*@׶ͷ>=ɱƭTkrO_R^qįUhKpƄ/SޛDgOb)oQޏl㽺?7`FZ+D}U:7th> n!߃J } &Ƥz 뾓 1PƝFY# ?7`}14>kEK }@^ j_Dg~8Beq[8KV])03֝,9$即}=Tg8$eE&5ꩯ}{-mڛj JrMjM?Վ4F3XϱۧpvT}?+Ϯӛ>,MC{2 D rV0/|A_}ǗVk^ϧ/YJ0 YJ}!4ҐTIL&reRq$2YpOW3vTupyj Wm3*2=UR(E^lyN5e1h,0QZ% h2.qJh3z m1B&Fr/*I`MF/cg1J F{Fu(_1bG eHH_g|^20 õ5oDG ΰ bso#F\F.M(l)p7˚~Q( ׎*# usyTRa<5D3A)pNi)pdQx0xLPWQwӠxX|L ‹Np`Mc9S~I|7GR?<=?j4~"^zMx)>Rbrv=h`D Tp80-&A}y܂sWy.m?Oj;Af#^X&PF49j% ..S_kW6^8$W2͵~cL0®i85qFlgY0.{@:RѢ.+Pm[Qe A.PD0ĩ_.BrҜ%F9S|Lgx@:"$ Qֈ$oI!Z(fc(3(-LA(oS5W$XFȢF&SA 烊:q6GXP>g~Jo,c^y'%q۶X`Yk2o= kLN:n_w{\;xgS`"|qUDC1=kO{vUqO[{^3/t՜z1SP1"ӥh0m{/[mxmxlx)H>^.HqոAE;i0$^۔$u: ç +_ wۓnmmI “:u/"Ioz1g̎i&5JI_||v5ځ ~"n@&D!uD t^<&PQW2 ELv^#l,8Ct!.2Pp6Պis)9R( Jqtdq6>pVGlI!D$ǔ!sr TC]I~1@ZR5̭[e)F~;.oTKѸX9Z(9%?ET+F.VzV* yd]#S0P}oYϞNR=|daZ)MvXOv}/%U40{MM$8pτ$gȢšoJ[˘٢Ѵ6XakF 1>ao6jEja,~b]K2{^҈xZ}YSm!* /F-r %ۘ6|%a2REGHsYq'B&]9"wF($P-ɒ4y w2ibyē 8ӠR4$4jx4'bΪ蕅)0iEX?S@'+ 6ȍ&Rk[Q`3Yё-Mpm%iJ ٶ#^(D>bvܖߏݷEN GL_ |˧W |eK*%$uҨ9xڨKښ, e32d8{.4E7w}kvZΊ4O흎oѢ`S6N! gT3r >`+87LsZB8drT[] NDE,% w<|)%T72D%+U\P`mŒNN@hlr[zy)և$TxU%}myǙ+JSޞϟƸ.q>O5&#X]rhtLUKXIꆛ0=Pf ;A{Gl&bJm2Jh& nu+ϛ#_M^Imu&gfa:E>VqJqwq/*doE6[ۦEۇJ&/9msF=h! |b*8KR2IB0#T@J+-5dHaoEד|8pѯY:PN祋[l2^+KJ{MuBr[)jGHĐgs(@p/P0}~|`; {Иflu6i?O,xoi[=u=GADn? 1ǻşk %4ߵȔdʩȈSZt$#LSF@“aL$.pBOuNbW%1*+|}H-Ȉ_O\`B(GHáJ4ɺ.!|4a/6P~ ԁW^Tv/CcUh&U["bJoA,qچh/ "&f/(E'B&ѵ6r%AD_b%>M"@z+N v<N|PXIAx 7 Ä4Wˀ僮ĻiP)|jSZ]6I` ~7ߒJͯ;d\lPәJ Ql}0UYw?g+`UHpƎ%c뵬bx"26*R,׻cKF؈ѽ*c!_!sH P2Rs'~#:J霍@2*#kϬ 1M! 
z V*b - e~-Xt0VxK ouxsց|3Н_$J>9m]mNk)a;Ã%?]Ys7+zqq$.Fem9vb'g_V \eqD6)yf}꫺-:Rێ0bW旉2~A_5!z}n^sߥ{zU^|m=˟&_ΐsg V!%X$Vu_X-mJSa< I8z}b3-Mm%<|(s_n8~2KZ!:Ƚ%@zRpe8bspGWs}ىǠTAb%=`T @+e/tɂ@ /=˨$jo|M'URsDMc9Z x):Ui昴Y9jExa'6Kn8lb5pCtBvDcD%/6 B(E2!GXZqKj<_kɥq>n)b!/SvS;#H1 4i9f Yar1Zn7͗عd &-^UIé%fK2(1˓}W٨%%2asL9%x#<Y1w%Akwz4B.FyՃO3>Y9!cy8oe%k+pU (Hx53R>HN1 F22/Uܼ^MIф%o_+}pHh??&x],#i<.,*8g݌htjs*l?yp1G_|!.Ǭwgnź6Eܚ5+sdmIn@A>'jBMfJVi_A 2FJbbcD;gځ )]Yēi,JBze@{T ]{aٹAl<K-sG5$j4Tp~^4 ``yLq+Y>dsVIcҥ_F)Sfԝ%Af"f0"ͽ[ ٍmrǞ&u[Pp4לCV[ƚ~d` wxGY-Pt(GZ^C׭}a3`1@>]]_ǭk˻qX.c%GhO( ړcK.2Qkӊq8| ^_vPs$$;+7@Eʎ5(m[92K6Vj6 )`&V!z,Dq)uxQzq+eB"`\kCǶF5V1*UDUKgÿ >hp r'??mͷvqoME4+ZZ^k-p*Jj.:q1CQ(h08v e^R8AeSyB' >M;1%w19A|*A<4ēNM_KxD)&E .,p@Nz#1a&sWNY"+l#6Z8;uCl-2x X{oM'$V$J;oYWAs $1_6>7*HIq0r,Wv%BT,f"j&Jb%݁Uk^L]EZRyES^-n/"]r#1y, el]|X<6g?oM}d5n{9Ԏj3_\_V>\·k/#ˈeycIO% XS]KʸH|HõDy,ˉ9Xzn'ZYRϛX&A`&Wq3{b'V\O)Rk[ͩ{ɝJZϝʄhvﱋB@6 #נu0E+(SG[?u֏7u҅VkI\i鳩ݔ&!&Ayl Rm`QU ǃ#R6ML[́ bƏL(/dE$B2 Vx$ VC'DҌYtJY]SN(I j6w2zq}h6˼3v5Q'sJJ@_|82}_>.rQ!Zw@_~U(!`[ ǦQơ^z|NQ0h2l^ͮᇕ:yT0,{qr.sɀW??܄|82?f\Wfb?t)ßD݋qnM~[ܾnSϥNLZ#!Y0J{aaѦd"r'?M6 ()/+0F.Fs CJvPO0D@O1(I=PE%] q<)B GR!'k:ڧ4G}JzX՟wp\SpgxlU>mR^ec%jc3nҺ ضnQ'L[ŎGu4²{۴?4+W"q>n@@u+ EuJߑcݦEEYdDև|*Y:iY'lZF'$WC,<=t顐X=+LP{CV P:ekbN,< dDM5* T(]8 >Jk.JBh⮑Ss#xڡtg|Kx~i2I:[ =xClGpl`7/2&`YaX*&U- lo2Ԋ;RB=*#u]T 6 5KYFd3eeM62|98'2)&` JY7(%A'SZغV0ɩ^ThõHx,~@T jb ':Z~O89B{qa}S5B6ZiΙ ĉeTaS3n@R* uH&o2Ui7/}=Da1#X#0 n%z]8M;1F)oIq=-%eITbjŒSMcsak|}9b+qp5;98`/Zxwk8լ5ܶ| v(W<Ę 3GպG/!zs x&i 6-*E熃E1qQZ'5)oxv*ENzap4d{b! 
& K?Iёw| D""Ng׸d $ky}d&S>z R;= _`p2óȑ CR Ohx" tcޤz3JQsOY j=vS4AsjCs=0c$#[/#^kfeAhq^&aܥo1`($y>H ;O B\j/H̨[E0+0 .ۓϗfɗfY 廊/4ˑ\o2(_@,1ϴ.L0"Mh8?j6^4&-Gt nm Jq E:j 0UxʠrWn%6S ;٧Fk4E2daA'b%N*V Becove!%Èk)]F)]F)]F)]Ԟ\ z_38F}nCvh(f uf`Wq7ݫV?o_5!FLa~۬] :nlM2}@]^ׇ M\Y.KYaOމs.`WnsroXh{JZўٻHnWr"Q$;%- mz2c;vvujOO;")q~;tb2ȇZB&Bd)h<3eV҂"JpF6}cWoS}ӦD`֘|w1TIoJ1/I JEfQZeEnҩMU:۱ҁ&uGs3hQ,pjyc}"_ ^Yȯsݙ-$A9+RvӮ)f:zDR5c~ i>;xޡYU< aՙTx-,~9%G ;DX†?KR!j=͇[UD yi/E/> LLRgq)8#N^C*V\t?vpFwc8oIWn7k:S9pYQ}Ӧ592S }v{â~]CԘ1\dk rwxD{(f>J2A@J~L8F)ɴh%\]‡ ͼEF2`u0RA7B]%w<F c=/% 鱅`feB{[T@M|(NˌضwwTkwr.~P3 LzjjW%9rݡZf_C|?|x̢|zjoz:{ob_z?y}XO5dR@,:HZzmf,WOK?˱whi#P?(U)iWtJ=u$[_NoX /)Ѭ[&Äga:FMjjG]2LaR.LKEpȃMk0]ܴ+-~z2TP ƁY[Pe2HSlqϥ|omF_: d!/2FF:񤼋lɆm8Cx rȑyHL,G]A!=3'-l1RS) /UT1J+ ?I3[`Qa b'm Nrb2d݀CatN*WTfHRɜ9+:Uهʝsf(gΔXr{[g^FShھD\9yvI-M8Vh%Zg֦@o)wd9&wZڋPQ'h>63_fラv€no~_/} d, 8SK]S:@&b Tgv i—SJכ z #,&OM`h %4n@-ABd M(ǢЖQ7K !9:E䷊Ky:Eh{?&G1ɀ6N4.p?_u]wu}G0yO'^AX; w" oH $7n7c)"`26 ciH7Ekxi}5b7myS\"!nd;]9MKȬYYY"~%g-:㲢@LtEm] ծFajFs[gt=7Yh1oT˙UQߨ=cTߴꢥgZuy$`XTP߻hYk4aZ*Yh4aZZSm](LKoTߴ\@u}zR:TcEKZKхi);trZ.LKk9k ,v5NHiN-~*F.ܺkdF)чyL_TKp z|ųXn6I'x&n1)l2p2ֺ`OyD-ei!5 ,2 .@9S*7ބ}mYh5s]`_w4;o2j4T&2z$E:HIFOT[صߦ_jQVw_G,oE $S&|wNVdVm Qj!lUFȎ#SFt.ThȣLaE[pP`:[ƌwLW"U)Q*˪>/Raf.U4;|T|TpccY8OUڮ{\t uUjx/|)NZ#hx )>rP(D==3맟Ap(pM x(H`xѤ% ߃<bHޫ!F؝F,Š iO?gI2`JhH[PH40*D\8tТYȭ+CfGZa/ҊE뙄u3MR)r9x_QKɵQRv8?`.7?>mrhZsK)5Eid2K)BH3JMa|9JGR`lS~yX(naiK\ y|Ų+N5 "8I\M"bO%x% Hoq];tnDƓÉhҧ7DWԕɬPɝSeQi+MR ) T,eUJPYT,p%o Uk*qoTfbT M]nҙ8Q* RtTE8N:4XJ5U00i8w,wUZTWX%>TU@,LUʰ:*+|N`MuL,e) Om&O.ɲKQ ybBL򸽷sZ: ıs;E&s-RYfˊpUVTo7gT*E2>!GE9Yv`v39~u1ME5j})= ay3a6Ǐ^Rn$u|c_~L?~}GR&cIٕ {BB|{ҽOn1庰fY~}\--QOW-,٨?VΘ_+^pW櫫Mᄧ͛ۻ+?@YHe,0c.H}#>oc&DBfAL& BA@A@$_'Z v80clF Xj]/~Eʮ>_u߸7vzNysXewoz>׫^GhY@ԤGƫy2$/'k1Q%AX4>ڃR(8G,#k[Ua6q&`,Ƿ/6q骫>i6Z>Ԭ-%u-ؘF,Lڈ}CGWd w(N䬙,'? 
=cg:@paK`);6譾Ş#սݩF#7j\~09c̶ՄVR*oE\?4P?Aln^ɜV}_@5_Wc?}.MnW1}y˼|wr _7^]c#cW1^nX;5a\=-}Zk!|*W QҐ\EI1{7c݀[_NoXB'/SYŒm UNpu3Z<Q߈nS"*ͺ_Z!4+W:2U s_ ?w]o7W,nq=&")Ow݇} gXi,'^_rf$h$H$Ȯm~EU(U[, aF,QpG%V{z׼&Cuc,IWQPg.&o"V`*N_%utA"ƎMRYy,Wr@V^pjƀI.L!x*YB , I_G=/"Dh3=.u%z%"Z6E?V?š ~xq_l6mZ]vޑmI58cFPkkyimv.K(%?=큌6bdv[Ɯ1V3v\)fƝv@"X#\_|o/RZ:/׾>kzꞜ'r%b!? "1r##!,ph>z?0taJ&=t*J:ݻd]-?qrjeʕBf?Y%og:"بD/*n D -[ۄҎŧ_c]I!g! om Įi-{8O/bRT|O^}oF8laUJg}$y);cǭ{>dFqt{0Pj=~]wsP¯:()eavm4|*^ie7_Ңۉ5{ئ(8ؾv)nŖĸhkHgDIKRb{aV'v.J=vq&4߿$ux[;$8xSF)9ᤢenk-bvƻl$=0ḫo;t#Pz!MIlGh(iAwg,?}UBF' d:H,; *XE&I1y{?Q3ɲ0)ZoCg'y[eeui)n=:?7y$$$ϘLpeWr8UGzUG\**3SP̘)iVs\mv9 d0 .k@J Q#(e3-jݓ-ـszq}TR'X |0+~adHH:A4)%m8mI{-F8peI-l[̞KVr|.k])f: 3C0 Wo>mէ B98>ؿZE_kkR^CrwtT_h 1Yucӓ/ߘDIk5D>5wqŷĶ|rPi.^.ޔvRSs4QŽHJ$Čsk`pRKvǦQ'f} 9nzs׋97׳GdMkIԙ-e/B$ תh@&d%{ GԨ2JFap1J2XNdcbwoV0FpF5Zd5΋Ϡin5d@X5j> ecBeQofZPYBŨ}ŝ'o >#Qy) >1Ikgާ^Jt姀ENdZMr5"h"Vjkhjް݉lU2OY`ycArTUhs}_;G CoToZJBġޝ@x1H-4f~2hQwwU!YX]> _õq%Cm%*QjछcTcfve=+d8x 4[߁ h9\lbtbVʖl1 p#e#&CЫ7p`RL5eqA6)1(8EQiI1 wIRL wǤM+˷f@K3i)z5y$b$pVOqkTr\jB&bgAsv~UZ\]k8FPW6~[IӐ:>cW\6>+5hǽEK6ýĠ;8Zج/ZؗB=oayJvٟ 9r|A֟\%j1eOiهyC幙*}/hJ*4m D trdx(fllpՊ B`4ͥ4 : ljkhlCֳZ 5ݺѻ?Fj~zUN5uUWOX1HmC' %'ALY$':bor" Vntn$4p xvpƱylBkXu@sF-y=vF×|М} GR7~28x~fγ-,'_o?&{~&\}_.<'83)ڋ?=±'s}#̧/T"_okUjU +N-2ソ~i96>tv@hf^SJ/mlDqNFspJJ`uz+bo^dI$s:he*Lp`wj1iQpr+x9d$#)c&(`G!)nP-*W(T6M;6ibչ\ໂ+Y5m6KkN킹X1AxrKF›/]3,nG oݼJKN꾕"@E{ߜ7W^e YJDc&@2$ {<:mi!JW`x9 wǖZœ5'/.Z27Kfe@2,P|=_ j5?eoKOgΟ-f-VHǒ$! 
$Wtr+S~UH므z!naUPRi6BJ{ 'E "*V$/<:xvɏθlP-+3c&'2Ir^a:HT4s*pΞf`X_Z1hm\_ jA,JbFNS -oןtzEnlcɮלmժwXk|?ڣVܕh0p굶-F8qdg{ A0V?qsr> g_AFxRjb@68Ƌ+4["5ۧT BRDGd+eg̵(dZd3AV&PMkYz1LKI_~n/j~_Z*?~A&T 3)Auf8KI@y;z>rW1L Ytl-+qę$d U4<5!ڹO]&2$PFdUf]!ͥ' : 1_G -+Y`XuG~w aȵ0ӫ:ոȮz"{I ٝ$Zl+>xEKEv'pwq ntuha0Sdf,3ܮK=ly@L8}`5˫R1"8wL65jyu'H7eu몂\Wo0!1!ܪuی[ZT*-#B!rEV"/} T00nXY8DFwb^nhIQ q"+88ay$![(VVi 'h߭:A}6*L ^:,[ ܿٻ6d.龪a<[nzŁ0GÍ{UIqxÙ$p,UuXHɰԵn3RfМCNmì2ҧt%77ٗ$qi9?\iq 5q_*ayOzFhRgPOo&?׮2Cq>] Zt`ȏeqq(g 5o_I5#M2JV!4Bm>ϽK4hknXyHT.)]%8oCz(zӚ+6\[?2hW=8}htO=KXe&r6".E4$?eFT pZ7T'/#c*.iq7ac/JjS5GH{WF?ݺ*C8Jdat B}5|=wqPfN%sf4J#(Aߌo_uUVDV"v½ #G@'Di+Eä(2:LXLd3)m.}Ut uyBInZjF^{&^39q9hZC(`oɼd;h.zCoZ|>k>(7|[I$v=Cs K1loUimՙLBW;ßȾ] & \+&]. R "`zޟ6Uín +Y:J໇ti+ 󦞋.psdPcV1^~^ٽ?KUttTҩeU&af:ZmZF-.Ӽ K΍{z„\hN'w%  ]dw}`Q ~w"|/-͒Yӌ'`0LT9#OFTTIdP*"a KCp z/uF w V8$Z$KM!5T0CmT@ !61"IHQ r@DȪ-Aq޲cIۂQz(G`FCuQF:E\2M@ RPjH4>@P eTByĨA5$h47HybFRPv4 [6jDj Ij\H,9&@A r[0jOH*B~"FM`+VLYhɌD QiUQK>-mupXC)vBkV*" %`0ĢJTH |E9&k6x mmGTY.uks,C9=x:%!+tj W(hOf|+87M5p8DM{f!1U?D^ [qW^m7-*r $PfTG^nih ayWW83P,j<<TJGNPR)}LwjoT70Մ`Ghav\C>̵gDPWei˹p{i!Ҕ[u.& +]Ft =; ="VB~XȼdXy#Оj!*D;L8!-tK1f֮-F<-g.12e'b@uGtbbZ<55Ӻ5!!\Dcd_ÖuSD".щ}Gu0 Bc[y֭ y"Z_\kbxSW,UOA]t1 9* ^[ NuVJ-TUMaFhNoy&yNύ(j,>rY߾}{SbJgl:Aۉ  C}s9[g^lR "ZeeꥠCF_1C]e6-}Ns׫s*;ftbjfF.σ7.ȟ"Ywn+3_[h9rEVG |sO6f 9eQSZ"Txd43ey.Z Y/i7B>!stD(:Zi ϓ<l̒d4qRƓP=w#$#,5JYuJhMpOR*gL1^hl4P-d@`g3@*$K<˹)$ˍU)d4#T˜U{c!0h87Hx(F ix\/E^KUkul~{i9Z7^P}+^TaEׇ}+d4RQ ժV%$ )8{.a VyӤJo/}3]$ $zuQ 5vŵP N<@/뻫[Ek0&ʓ`5BiU(4 CAd9ÌILipUPO|QY^ YU(mvi OJAA\BJAIiI5.H K¨GJFe[8T+EJOZJR'džq?)-JӖR. 
ピR.ZrxKO[JIjfRT+EJOZJAI)H )'s/RzR=5tY%V;_0ǯ^Oў5 jU R5Y,p+l;ZW{DJwnCeIK`B~+W?Hˍ:|`TK]=S:sa E.}/2\9y)6Aᬥ^qPNi}FnVM5ۍJ໇_89\ psdPM(ﶠ 7Ç/)y@0f2įttt쾬{l2 0~&ʨ{VklDJs4!!A2#\CROn=qJb/:, "6i;fWzLR/-Fӑk_I+_c|cپ0O_HfD)5CȌj "sR֨IgkPB!5"NE",dYA})^pJOSֳܔ# QF}۳W,ER}[+-]cRBG/wӋy?Ӹ(oF8ʤ/oUY'mrVLCv7o}֯eO^dS[RS$Qr<; PH]a,Q n4>B׊Ӷki]sD>xvM&GK5|No;SBϓ |fZ->`I\UdH|$&ZA72|^uIG"E)Eʠ]{m1ϳKR2cr)tɡV ]{)RSAD|3jOSgy,(W;?n M7bǥع<3Ip)~\A)7ǥą5+r)6C}[ZtϛKOY}3R8AƟicN4X@JH鞾P +}k"ҴJ1 %8%' =v3`-e ۻxoxSH2n, DP %$&QPw8WeW Wz{\AĄ F-YO}*y"~\ã‡Dk8oBIWEmǾtOؿzPP/ɛ*irE*m:å `"jP%ߧD]'53xi0?,'VsoB^CV0eµ*qw`YID Q(*6"bzًp#. FOlC 3*W-܋$~`ܸChg+Zu7%@G//^m7Y: EG)ZhΏ6Zȍ[p2 5px>FϤF)bPeA3H15 i"5cl<9f&g2O6A@D#Jޞ6;цj!Zrcm6`Q\sjO0VBJ0h7Q )acØb;;FDia5X؈a)c~'IGIFH NH*i6SK.1tB+ȈkYA1זiűjGM@l]u>`(* 6+Ks\;-dFRSWr$eGi x$Bz@kʁY}֟hhXZ`L`=OwL ѻS$%UM=ُ̔҇7ZՓ܆n]\]rꋳdղ-1DaN #:TG,g?;9/!{ݑ`7o]$E]ƾ vs0H[)O{R#n=hŽӺR &mQJԞP3N\3XjKN8IuPG/;V ' 3xi1^04Sg>K3ʅfh X逡~t#D_$%ʏrL({]Eڣѧzc^ԼޘOuǡOCo:]xhj}O,Yj0Z:ԮAQ&A6z5P/:?6GH!OMBW6 @[2[yOu}p&CAr2GnntA>V8jDPcVG=,Wxw_`l;Vhzbg gZx8 PST)TmjJPғi1ȣ%;}08-tW??d M |7Y?=Dz!PMqoكct:VpEn͓zJ68/΢Ҟ<;rZ5;kk_CNKG> wMP>Z;yYB;5#JhAp$iU%ʘ$QE:R [Ik1d$ ej 5YA]–&oP $/ɰ{m1O&D2ǥRKsR5ǵs&Jo =w.Kq\9 .KsKϜKRPv\ *JQzҳRRBY;GoOCzC5R2YʓRN=PM!E%D3"JF*Qi¬N0qDbYʱD%IˆP\L$OIBOOlm3h]J3 ax 3 8?SOB[ҁYw;xJC_WN4Y<‘p| L"}NKqu1dBi,5U3Ft TUaP@|aԈ 延ï@~Z hP lUMt J9Ca}4(t(K]`5/{Sh0SiTF%AL$BĀR#`uِDQc&H[+Hck%>R 2%TIBhĨV{$sN8;I={NwRZo6u`J*2V!2X/P=޺܁Z]eit>AdөMz}Οw?9+gуFA,pU넅Z"NdNBssHPk%K٫_|>fT袯`@fιu9]ӁW8CF_ '[uōߙ)G.v}dnS,>;exd8ì^بO4BRx1Z0mJ AH. b& K)Π;KAQ<+U7Eg'yqgɋ9s*a}85ᇻI^NmctoLCD ͅ%Bi`mdBÒy~]W]{|7`y82,&2G[ɢm$[cHGύ J*)4msuboSc %?RF'\eDs'D_L (boZz^46B>].ۯ'M4[ ϼ4> (s{BO! 
܌ݛpzgė3*B{BwBT}J?u1m/`g@1=)>5/Us;U{̺/O\`T2N(Ŝwwwwebs[#k1ј3A8իb[40+Vq,\P LEE޹-(Ewi ^oA0o)|u|Ѫ>'FS..~aU\t`LW|?#zGZáN^, !U{E‡lKŽg } h9(O"B!f6@`ދ1B .JAOHs7ۼee݂菙T\bQun[x~=q~~5O xӛt߃up_o)p_?|5Yob]pZ7M~{KKϟt<{?|ULp J_˟3\a0~7?t묕ةqgQSj2_o;,3}]%0thooZ-N67ȝI7 :@ w Ht/UovNf8i0 p[~v/ŝ5jˠܸLh ?2?-9i?uz4sD, ~Kp.5VT/&N'\-Bn̴'!M$SD+F"[x8fO]򣷽ĚϿl'0?l9 ]\;̖O#զMA՛fuC*_0*rw+{|gI3H{ӿ}ξ;P|b@b{FBFѴ}/sg?8I`~b g:d4χ<33-S6913/?؛<";/4JrZ_MӏS f@,@\qrq-}^/j?ؤ4.fm \p S3ֈ!~LXKjq\!D'0( l\^i=\uo2P6Sj {ѿX%,%'`|2@rCT) ~e@ZXg f"[_ĺ ɈI#O;[,yٳga^c'%'ߢ$[nuKV:1p&c]X:.?_:,΀?c{UAdLivu}7[uN׭ кq)jNjz 5N!Gonw\ srR_/ jRo>te2LPL@4HT2s;^f㲮v[Q$wh7Z{8a6Gi#ґ#뵔l9a9Bh=Bguf!MB}Qx(E%Hj"tZO\PRr-"= gOtӇ+#W}*Qiqm_ìe.w,}mF>Nଢ଼At.57b2ⷻ<Wwtu'59w?ޗw۾CrFIgWLiY|s6/[<3)i%GR2;imNO"Jczn̺hx!iŌ5|{-D@q9%jAŘiID32VC+<¤i'|)t"`=3[лdk!;[?h!N4.Zzm 4i! Q5BҥnY>75gH'"JB-i("Z8>~qCadõ?aKF0-D\R2,1_&j˗|veze)۷B?^d?^$M3w5Iǿέ"ˤ}fn 7^DnXwgG3 woa Ł0n5wa Xi9㤘KkҞ8%UC!sI\jC;1dڛE8c>D/wb'?U\ug;};YySF'r1 :i):=x]q(EdK:[W^7|mw[\(nZHV^rBsVZreA9kSkM^@2H8Vn:$iw->3P{[lFy"+aO&+>lsM/|M&PP0+k:/=`AXJoh}hWBl|>upV_l'I#{NuIJ zUjg8R&A-Žp-FZrGIg9lwuXD3\\-=ӅPDpb3`zΒ #>;[ӡen#r-)3kfZG";NiL놅S @lqZ,ݛ#|h%9'ˍ[sTһwS>v-%6?ܹݼAr!"w QzUtƢLLbIee)ZZB]]í]L-0W_/ ?JH:n k 3zHl<9l _(%z$zjW;} _B}=(An;Wzﬧn;ڭP.Vȥtd`sM<$BGZzDTb嵱tTŸ6+( IcDHJ% c2dE˃3`Z8 md""M5 Ҹc ڳjڰWj]?}n`OGRT;MsZ- ՒSۙGr1_ٿ6$TG5҄&[/H՞{9[ Fc]ab3xRA4'f +VPv"'<YX?_7 Ci*b!ܐj2dJÈ+anGu8G{ꄎ58$[]{]@|_~뛯{V E4LCu S:YLZ3%[ ښpD|w{r./xSoq"hU.Ka2 2׆ J}yօh ?_]$[?"*ʳY *"f!~9# ]^_N ~U] !Htl,?kY0>3cy{TXYbrgԏǁMO6OI QUNabAf^*|<,6HAX>~|q3K.䘨{NDHt8~<` 9ا+Y>.12xLaUUaMd+cN:RydvmøGL.Ǿ~u%B^ʒ]LϐU,h}.4Xbk~[ƚ:00M++p 2ŇyNUDΣ/-R &+]G^I}d \?/lhL5Px㉽TRL CjQ$kltjɡ[a`F%~ߚDV^AD!GcvI {f60A GK4$]V.ɐ160ǥc9a$?ys󯌾 s 2XT.y4 z+M0ih8Œ7p $@OXq,BA ڤ=Hef5&Dt|իgYy5X:pBAځ/bV2R-u(ʼtW:'/@e`%;ʯ %X:WڍaBj8LI{$7-!1CJql0ړ[>BKR=spݨ!C7WBK(21)M<3).yӝ6 gz1C~&Xb0@P)QuX9Bn'j!ĜsHWi7>ʤĿLtu9Bg9\|btmA 8mӽqǙh:e~BݗY9luN71 $Ca:9ǰD SJ^^޹ˑ vFx"' `#s9t?i®11F4s4J1bZzddQ6dwuC{n& T|>cA7\weNTg d10tQC%ciB@lFÖw소;~N8U#YԬפR;[GvQq5Ը EBܐU*? 
ߊKĪ% $]|L%Vh kɖɑDe͇w6t>iߙY"^p+@&9WDjXȱ-9_ČW13YԔSؗ%"q QY]PQص7CN +\uLܘ9L|Йu =EcD,;c5$LRuCzFyHKq+$KH:r7ҟQ.k^)1*\ nATCHZg^%@nt1QPxT6z{8AJ(,a#=f AJCZK V=P 7=RZ KPA]`)wL֦1k!MBNF@%˲Sr86E-:/Z-WUֺӞ6[zuԑvMǜSbASfBp)1b & N%x+KeMdf[;6ge ?<>j=B^_n45^v!^_fxsGޭUX ֦X4# *ARYyM)(voV6t@@%ayN"W#UoAZRe9G-]=q )G6΁PK0#B85#ֹ[N!{ɛ /\rJh ^M%ߑ45L@,Q?#QY񎾎ԥ#qj;RYIPW,EPrBeNI@( uזhAdmaJD7YEXg, і(u[s6D|:/>0]ŧVK~4R93J%kf$i^ӺS$?UX`}r]MCyc x &GFM߮ O.g=D{ ӧh_L;dŻ|4+Ax~2;iŅ.%JDA'c\-ށf߽Co+{zoJ }Wm1(/!tWɞvc0ϳ/>eL9䅪'o9@x I4P`yPj[$q$"^CZ ;3A Fy"j|cB=-ԁt-%ZioB$ݵU5h;?+d1^H1QwZF3$c +1O$.e]HJw5iMt@6FiEt>_cjDy=Ij$"WY{|ꡮl7hMZɵa)NYc" #Ů{6 `R!MFbc)6&Kr 0AiUWzBW..I_oaW"QIV$|di]<]wte{-NecnqJXp}R0[GH }R =I!]TIIdS.iBx›0xm_lFVcA(%`vDGj즮<;W"R 6AI&-5dwUA/z@y[㼧}^pu*'RPɽƘ=[5 K+ ;Do. 1VfDxk̾y}yz2嶟#4gu2#\e{w4LuakzC*=\/( N:%$:ESjhXC5PH*.%˴ځqNZДR@ˈ"{;{ R+ $N%Ǹ޾5zC:%b@Υ Ӿp2GP~'lH!ШCh$HхQW,F(N+]ԔJhZCW-ؾG5,#kv֏ x.FnOSʁJM 8hnM[X J11\Nc#O}%u'jCO1T>P0ǩf(DFf!&u**}M2&&\dMYEk{w?i̠Z{&(J0s֗o&Wf$ػҞ >VT` FZh4w޴M ldMVl?纣LNYJPr}^Lrϣ5;H ?8@D*HD4,t)AT#}GQ;oJ L*ρ#IB11h"MeȦ pnt4yk/vW*9 G0Z&$O az6O.\,BmH#DC9P&&˄2bz&yO`vּS/\p8D9e>[n: t2+7f3 MƁ:q·T@;kКvS>N͐s!DC (,:h2n2 N~9x~)?bf~1IϔZ!\f n][$LB~0Jf!f?R$a` QZ!rim;Dz&2R3(쬁諬b6?y\gIPUxd ЊRnYtM?JƏТ;-wudB.xPMz͓kLa.QJFRQV^_F@ĉ5OQ+0QV-D()بMAaZO`8%E(8b*FK{N%{Tڥx$G !; 3)m6wko !:B.#%c̳J/S93@.CCJ,?+ʼnkܶV51]n-SWxԀH jӜ Xp8,AgIē&NIxϺ|dK *ˀ?-J8?te>um'h{'Lz/dU6^#w%SW'=dE;?Hrfy_ܧ uTʿ/e>t&pϟ)D xD2(R4qQ6r42`wi^J ?l]L7?_|L٪_"ٰP W. 
f?󍋚^|e"fR<re|9ol=e~1*hvkg7q0-SɅI,3"ykwcY،җ Dv |1~ y{ye5%;⢗TQ2eWG4Z?/vN"^?e9LR56&Y!{o:84)DiP x' 8ayCс<6F&i,Tc GHi|֋m%e91I3T5:IjnPw6>Zf{WʻXH9B0%4F![oܳi2_e5"|*%Ta6Î~b7lGW2eWFJJ(5yڌ$j'ΣP!"'}^50Z6#E@ui Gf08 *mFs F3P R3R՜3H;=licC+X0)@zZ1Z" ZNF/@B.镖 ZSF*tF(hXM"mv%r-Z6$T905, 7:`bʚȑ_dž+cge:bab:6i7&pZ(Q& RGI$EYE=rmSC^@ֹ SrgK<?I교YـTuY٩hwڠ_08jPyO \gu-v%xZc95SuGYA1!ubZiw9(P-#ߝ:Ki!m>yOpl5ltƞ=/7\\\_6 UJxa& 97H͍єwU_ Wx^83 AD2ǤJC6lڡвO-%[;6*q7b(.caYQ!B_ҬsnSzCȀU_|Z .nM}Hdz5Sh~+} E֫dmY^Mj$wa-o~m :,jսY[Rm:!9Y.CoNAn>T_GA-4H;nR0CC+K(e)d^{V$HJdq%o+`#(ejr6bBLh-^0I \hA ziep|trȻc_0:LG'Cd]BB_/[T l+2ϊGg5 ]dA !2J"~db4,IgFEvT YlneCv@IU[fT|BfB sfL~T!YQNآ-@꫿tr9C?fynAirX2]| g9ВעkM$E@gɭFSqF!ESO28CbK)5"9w MC?k4͙ %iIkv.f:l,t,K1l:n#"#\ y[Jɣxw+A`(12-g.b=z%(tu6ZYsgb Z&:Uǂu]]fX<` g[QK"V2"s=R(4 4}b Heh9ZǾD׼z A42;ny#T#`OReEj%N6;-2WȰWI1-R$kxb$ڕOD=PFޞmMYj6 >lwvpP-m(7clRH19K怌737gz;*@.JclEèY8a-hb$-?E8v/Eǡ&8%wH./p> 5CR/F1e ĨFRP%NREInw2T'!K{7wu[3mJmy=9UUAЍ& 9iW#*"7k j\>D?>,iֆ0S}+eO)'{1]cnx R`щͮn@!29*̋˄8qr! /*>ScߖdM)Xvw)P\8@I1t>" ;H  z=U%"XMZRD4+,R9[$ZHzd..klxJ  zb-`N WhFGOb2Jq, /(k>Ɏ[RCd9과j6}wI[k6%_@&v=crn =]b́B/ mL SSǴet֊t հH䨑jcZ8j#R`*ժ/nwgc!?DU1lEv?#wG f{]b|mT躿lsz{ ~6rkT4#@ZaZ ;v`8O-5T9&S{ '=G#\l}[u@T15 vO%36 Y8Ӧ{9o>̶64˾Vj7g(sN~ΖuC@k-եߛR- `?CJ5lU84ބA$ uf!`]2Z]ɈÍ uD7/А,Vl{+j+RpS-L⧬XN0ǨWن)plwvU'#jMQ@.V]5صw?Z'%x@i[{]CSq>91bb!2V8{A7v!Q́sh9Hp*}6AnW]{dН$,u[$$025BS:'/@e`SC!Nx $O N7`7警ziB$wr׫nBMM=qQ ׵ ]TVzprr AB.`D(y6#$nw4سq|=2W(~km:/);J@X.pyDDC>%8?Pr?8ֳ}Nm6ZĤ`4Ȝ癙LqUiC{o À̤s\mU[E==1T1BR1 PX ڨڣf:+뼋CU[PLY}h]3,n_l)G%oq;j'8u=_A"hsHqNS7YQIK(y0FYH<1rk2ͽoAN9NL^a5ʁ?JA!;=|u]:#Pa/z|,BpjbԸܭًon=oצ ئ6'` '_jBN+)-&zqiЄ-c Gg&5tY3cӜH'D`#$Wk/C?P-XOZ8ySmrD`NFd9dƅ#l=Sh4TXaz5fLʶdI嬹w]CMlH* V32eA{`$Mr9p0InxuXhgg,ɅhY18O56F[0|9;9b-y f }r / \Ad%ߍnq.cp8zd4-+J>cj<0!/g}u W픷v)+Oj1xR$nBّ5rA2 " "[c)HV7L7OZPCCq;EA~-gGEDQ%o4zA ˤta$dLE(3߷NܩdVT B kn\Ѐ\7c/?ݎczdBNdOt?O_m翎]>M˚iT,Fi!,>-_L~ѸD?~Y|ۘ/ۆz}IB %qŸ%_v.&@.^O.f~IeWƴ/tcYMsF7NMu^Lx=[2B(/??W~|wl^w7/ y% OmO_)t||Mg}?ϗO5ە؞Z)&%hE4hƻoEM36;6|3RllmBcE,\/_m6jp@hH/j=\6|RoFn?!RdSd&Â5(1Zr l&{ HG'h'jr'+TZ 
i5q}wYI{ݣwF7.),Z a҈0Mޕ6r$"i LIɢDQbE [dX_Dd\Yq!(JvYoܲclyx~nF`[k}[3G9\ {|?y4!xm=d9juq)AX㤾|~'R*2:Rl˻'gûaLw5^7[pfS҇SSJPQQ%^Z& iQB=S,:~P(h΁<ќCPFIKJp_ Z=5yQd,ޜ}(9R{I!`<3ʐ1H9)^FN$'o);R-O^]VM#$yX IhfypFzrsGGk:@}f"R3i])Xz&96Qܾk t(Q75t=LJ*3a3g`lm~pu*kbvN娜7!P:%{f:FW6:Q6y!%)3B0(jd]kPސ߫lv;@vثX !mMLOJlOp\<+ۢR!'tθX0jZLzޏ-I :͕lT9Z;oIwz#d1$?F8^gGlb˳סh1jC. ׃ܟ _c+j;< 6Y5 6s.xDfĵ`cte0` 8D`ƦEȮgtDZx4ODMB{a KZ+Jns4@qth@⠅7`'/S֗9l3Z;~["rq g]cToJ1x(6NK#yk jR9+L)yeUlήuXA}6 PƤGś ڑIv)r+N*虹Qh)x.&FoeiR,ɑ7I Ѷ n{׈R; QzEhsHj_+$ rƾ%ٍ};c3-1¡?\Z?7b"gݨT)reNG@zdQ2Tqw s V?#w,F%c>kx(a5_P>vʀuKUQ+A l92) ̋ȉ3g7+{_fY&4 aBhk쌁zdeIkl6Y ݅i$hQѡJgkɓc!gX6G7 pnj"#K[V2_6r\ZV0}cǍ)ZA E1 5 87]ƁKb) ,>%"m7+_^[{6|YWBD|IN-N~n|mJczIf{jו#ЌC' })$EvNrdyLgAjgJƄ6IMVY|Db@ u9q(͉}ߞ;]J\ye2[㯓+x2ۣ-xd.\ױ!G{du8ڪUἧ6vCP!m>˱|rg+DeY.S"ܟ`JX.r_VAJ,F%56F%әviroCdh{E&!<~ a['w!?ߟwz)s BTJv6oLe09hPqӥ!tI4Fuoٚ؝ oഢm+m/6Nv] x&0ֲybTiZd{ x.Qh~ldš1bHl8ݟ_s }c@3'2 hV~ш&yPtu0uGKe!lH8rTKm<yGcׁ=to4I/T[ۙ£ILz;ڎ]qf_c^sK:IQʃlU3Ob>S ,H>&ٴvr탑bsF+WZjz[Yۛgd`̓_n#=pF~6[`:zBKnz6~w'-]]n>8- |/ZR[j >f^ʜ䞧lB$ߦ>bzQB]}"X+tJ8ky≪nͣKf9ˏ2<kFZC>яꕻe>#T'ø.[U/,≾l.W) -gKӘ}*']< .? 
]ژ?LI *଄-]cOˇxq i 9sj )nSNk÷t 0t#HvŽ7|cv3`UjU3Y>'aD֥AOw3ݘ7 #/;#zEP$H>RQC4YjzJ19ͳ$V +ۍ#G"}jc .w\ e9UA9D^1Hiýwz)/{p9wO͕w|8qJ"rpnnOpXX X:VDV/~yuewe#cYF[Rj¦=-t{_8[5NI\ґ)2jPCE6ķ?>Vj̢؀-<#!RY:E,{uH },n #"h *eABB;7>TʹF<2O>0/qؘ|J X I% ި~tʋQ\p2K&(ٵvgAPaǨ0ڞay%!IQ&5y#QF'b *eIS|,X|`_X[,uVl|G83iL5X5$4l聣c1 B,"zUurTȪJCl㺄A{,Ƿ?tmt _'?DXd&BH/?|mrޣ݄ۻK֦px%ܾcޭ/=7߶13VQ{jA&.Կc]z+n/ӟ闀ϲok^ED)ddbUB2e "sU ñH9Boۧc RʿMOwRJq;]e@SdXzs'77%*Γ4F[etZu# (kW@?ȹzcw ~}bg~#j.jn͂  |m{&LŔD n>蛛')5d%Cӂ(+OdzEd__[,(G2V'R#'5liPZUX`0Y1n>owC޾ޖiL,47lJOVa1;aM*SF,7qo~u8:YEq8Ӹ4y0<ןsvp{L{Mop}WW_[εmR6eokdv}rvR|Z*Ad_:|(_R9eE@sxvߟwϓ~4н~<=.oXeuMw MEuBg{ Gv^qIΓp#޸$O"ڿp!}9km9ժHYԨtU+&{iA5Q !ꑊ:jijgGyFջ&̼UͮVRr5:]>蔱9W.Uimp`ۚM))1X5m":j8v]R8b-$}w{_b9K*HиYF5pxÉ*!'QU\s)mP\^q.-BHAVKh)O$|ˤJ-%q.B 9@M3oEI5/rynQN"8|{5|6:BFgHJ#nLg]f0xp>dЬs6"kۦZ/w (mi˳T}4裰03nT` TW'O#b"%(U-9dG-o31~@)*Xz#ۍs4pԯ%dH q YI9G/z"; 5j>&W+ c`&S.V249_s[aag).ٖÜH:婦ͫ[uv7V C~v\>C|~w{?7B!DF}qZv^{7J>n_Hỵhn\KIr9e#40gְwQ/evvqk2~,Auy$uuʳۺ,Wh=q`0H/8l1$}58A#;E#zn'pנ""œCkԋ#'(yT_yRG ƯWClGYidHZP&Қ(K6Cڱ= |L CJjԘSdF3p=x63Owwx:#AbQ=r@j j IMk+"a))SDP횆aMcL \sMRJ!$P %hvұ@gY*>g'2_O%ܥZ0h?vǧF0YNyaMH:M+,ReBFv#1V4{ i>A{2.]@Tface<}-ʢW/E'ϰ2N[x`A׫8/?!v5?!]9t9ΎF7NS]@aVhx7kh.*RePAI<-.l:V@j(ggz+VzaWty R-1q6Ĝʁ$;{YpB(G4gp$!rZt"R=EU߂XQU.Ȯgz@3ľV|DRE {I{w+/Ze`:((A0l8@'W)m/zzuCG!*/v8y-$q]Zx<^i8e{ HP 'IW/Y+VE08Yh xAA%bM)wk>C$=y8ހ[8Tf;ŮU8s\>%W.29z5%+ZԮ8*Z{ޱz (Slp>a*O^;x:Z˟p\3 ܏_ x~9<1oyvy^K3]{xID(3.ݠ&(=Yf V}n+=u ڃCth4;îNYtm8~ _FA**ʼn^e8kNZS/=QJ+" (SU:c*Ҥ*('GX7L7]CWwv]r+.Pݶg;Mux~5f~ ɺ56/:yUK >AпCEm_ M*d0M['; m 1lQ(1KD1 J}fw<4ˇB&ON3\a<CnnDO$Xi qO8$ G&%O),uƲڟ<:PxYߕwYڔՇ_/=M eQIɲ/2:CT Q1{kSQנrXFc'lb:`w -_ۇ6z"xx vEHqXPF\R+&ZQ:n~M.25NNK]}eN;GNh,YW_fczy.s1-W77Kba JD\|{U<>^G?.w ; g؞{P۽ܘ1"bmt L\:| %nD@[[9 ǒ{cU ިGN"iVs;gBe/7k qrMz 4m֎߄AiovS{C5C3\so.Ge>{~O߂ym.6 s7BBnD}&oP)*Rv9~ѾEko8`C|Qc}ǁ~iQN Nh19-d_#lT(/#o?n_N?G+iqcQnJm)Xʹ;hJ"N7g-d4R ה{Ŵ$cw0ktKq0#2a΄{pZK痮-MVB{ .;| {9,` r4$r}bGjJ1jGgĸ' ǡ|valƑ1~2xxZՎH?[5Q*ф'q)J#VơL&ceD6sG(-Y5!kF) Tmi \biyd4\̦UϘ0cO+ELlE+ĪH,BƒAm!Pr;ItbmL!1P?l"/J^46@0ŋ2 
Xcڤs0۶@IA$>{AJv${ͱ˜)rfl:6+΅;qȗ|*4gFXZd*o- Db쪆mh1 B$vC/ƽA"&bbj`)rutJg(s`h:6h|{[EObs3 \r x ЋFzaRv2nfy9\F.p,YADbrҿU:E=7+! .xâw>N_XþOWoǸs0߱Czdkbh,YhBwJ1SWcZ|{T0*KK0] hxR=67>AIY ySD% Ipz!G]@L٪ι$|,be Uq"MFAL2EcymKXӓr̾dTer'6M0#ܤN6X5ipK=8Žrj0۸BF+*Vm k[E% Yjc₂$qMnT;z1ZqU $>Fv6@!:8_nFtCdDHCG6#ѥ$ElKȳ)}&( 0?[j]z(]z}m̷̳a@3}>Ck1sǚL|z:Y`\)VWz! ƍn` $Eï}Gc R{>J(dSNYAaͬ( !}1XАլv3J@,>C+%`V}ɶR\!Zɇmr+l ;cI1TYQrcuh^ӭ~rcu|_ud1_V#I*;pXi>oO3~2ׁhK"[`^ .:n7rhmO5gرޕcUb ֐C1gM6kpI“lƩ 㨛| y#,Sgg[uS0?Nӝ G袩ГțOz]{__UjdḾQ!l|6H#_>S j#lAa2ZQ YQɟ[)* o1{K.CQ Si=A$UEm QM%\/AyT+Qvxqz\ߕ/'ǧ\ke?$Zۤqs@tbJ*4 ]^Д.7+^NA4 c AsϺJRf}ஂEEd]%3lŝ P'HɆEOk/htc(#妅Z;//{`-͓ 4DbZ!,t"qKe&kN(epƣhJ{̡ >D- U v5$R/ޟX sP`zZ6naB{Vc/{`^ϔ(6MzSXBKF5:i&sL$<_{iȅr1xmHM(Nˇuk Ga al3}t#U? Zn5l4i(V1E0;i6#'#!7D>,{`B\o` H\GLzJ+9M,Xڶޭ?W-pl5-QL B2vo3\ @RɁɐdžHāzDIKm 7ٍahtwwvx!wo{$g+Yd wbTF30}HY6\M r"(4Cٞ0\kj,;",rNWn6#1aOB}w"j'i$@/y8,{:X /Fv?c+9s=>~s~^V2DEI/ 03Y#\SԳ^>>JʊsȐ'k%iQN0fc B=K$a1$sfc<`NЛZ?b~+VnN>vq@˭|?:]nKgE?qqnf>u&5Ivõbɦ2zcX8 iF]{^QpdTJHlh>GUO>3N̗ɘble UQqDP"aZ{1|: ~@~/Y{s|Aߠ z|t|M;plOYDO]8 qFy27\AkObk$l9F (Jv!ikJqy&Z.X=ɱ|>8=~G+:;J 5׷SG~~vc?r8>*}:O J~^X k5+֐9]? v?h `bψn5TeM" E+VjߣE1>YlAWGÝ'fa~!t򴿥+oJX?.c]C.sV(u]]]G!7-E]@t$'qR!~kʢ쩧ixlUիnWS[=n9eeME θХ:K.6r\6oW S+8A\zr~-(j\b? *J R>:0$6ቷvkjWvkjWӚvG Vq=s`s" ClKN0 lw:ĥvm}CHx$ÔD +K=U(OD1ifYkoumuU+@U0:XRkV,e ROG1vw䯲eT @q2)'GǏKitǹaZKIي6 3ڟ#idZ9& neXs/aa_I~9l'S٭bJ˕}t2d\frPzbS6'6ΣgdD3Oʸ|s<pτm(x(&J c(,*hIQFZ WVl> f ԨUX ٛ16&ꉤ{C}b3fظE4xr|ԗ)*sKsm,Qk3U0_j1h>ypb5>>*75uv[S xk޲iEbd޵q$Biؑ]`rvUM2)ɶ&)iċ83=C: DdW*;}~F:~!Pb:&mS%8kӱxWg䎯UNW9_t|UOǯ+T|?.QU.lz}O?T:ZaX)Ӻ3ؽ0 E?J.=^EDlU==^Y!@:sez>>N {+4KWPS*+J&GO E,O&.w'd--`ПɠG!D0O5$Lĵ1E GוZD/!0a&"1 Tnpsc#aJ)% /`ՊOQyra !lFNQ.)EF6KTc1LAߚoec0q9Ȣb([@Oٞ٧xu9ɷk??ȆQWg7i=}ƽrP lG~8NO:Ҍrޠ'sh=6B 1Wۨ,,VSI=vRxc\ψ4uo5Sd^բrO[O[=!hU:j*;Zٷ_bvB9A&84WIZw˃-1s>W?~_rǧTJ"нQn(į_1|X!wOb:Nbǻgoݗ)7a - ]6("k+E;zz}$א䒄۞6VOͫOrHT~ >w盶공g 8:V˰֧V[D( *1{% @eUtxG8a mu3:St;}j1<)iАXAZ L }dDA" 6 {@dgxU5=[ꮇK5nb!EDS!wR ? 
\Z:znUzDtA\5I0;I$x`ʼe4EjU*fR=l`u[]OL9d̗A>ac+!/i"[G14xT%XrOe-ȾquR\`PY0yɗif+t-[jHQt=J2& c%ʱ);>g=y% .]L'o_O[&.'ʦ"bbR v'j\01*,(i"p`2O ]"SAP*1 =u_.ؔ>G?=PEƅN[͵h ZyI\H^pۈd\KX;O܁x+7Z>TrZ5l?jVz#Owz4eC6?|IuRpLI.%AP.,Qުhq=V%"2 qnplK/e Zq/po#L&BTN G-x`<0L G@Flp&Gm kFˈUx0D@#<[)WUK6ŕ> ,kE vlksBzhsÎglPk] lE ֲkcC_c̩:5~{c5v)I9-kHv_;vOk1gS&p*~k1j3ϳNad:(< zf2&0VL&C^RlNz:5Y/֮Iuy6Jp2HC)l>_߯eeioGudC L Kڣ Rkg{VG>Cq'&pkzR a 8wm}/M:ilUڛTr#n8p?4x~5E~0ba{5O/Ƹ輡5Rڛ d>YdtW 1w2Ã|i VVPtno}^N@v5:9˦\N=t}'(3_ u9'N@eMñn [x$ߪ4C8ETׅ{l#F.V#rFYNq6Cr-V~vQs/ ًl~:V b ]Ȳ%(Г@/̞i w6G vZ ];fOgIt4(ME9e='FdE2^'m=jCCJt3s0d~.@EˊzbDqa/)y}oNeT"{gW%%;-gs[g>U7hZ oVIrRR2OsgYRp!f;eWT ݟcDޯ`??!bHZ#YD&hgd1XWU\gs;9WWYy\(,7I4-N?~3o1oN4_ ٩J٨i%9 wmI_!e;TA8Χ~Xt$e;{IJĞ,1$hWU]UFXȘPPK׃=|og9_Hob8E+'hP =Fc7e1s# k aKL3uFUL8[qSr>Wj},ҴTŖ_$;^S>$ ~}DtB\ 59S[rq}zQW~hq̽[fd'p 4aͭ-Jf[ٜ qeJ®(sx=ߠpC{9;_da/a`OuLizɇד>T)P6,^FtwFRIf|\L.h%z-mQִɖ yl 2&r:)YowZڠ=@^4(?\K͟<b5Zwwcs ^4Z:uj %l@GKpw:8*WGe+"JolR\}}LIa_F3Ke2"5Q Jƨn%+/GYss\lT"9T2@-{qGi&)5_⭕"z~fxqʾ]vKN,tqZn.\1c{4FiI!1K- i54l٧x3̆|%;TL:\_AT1y {عOEQ@|qHEQAHy뀸.]"Q3k^fg=/.9󡹈J'[i.Y\ŵhN<I)c!XJa#̙k!:oPQ ;^rJpt*^$˒:ύ<( EN;F 1O0rr`A8nuLƒ%^+x%@RoCӀ+$ѰL1Gc諵U2'"g$ 1] xl&o;k'K>s鑶'Nj%O 5zㅝkI.|N"sc"Ӿmtܭ4R%pku;*d$k+ִO>(Ε0"VB龷u4f4(۽jbi8J ZpW&OQT`#9۸c~s40ոsD$.tg9%-("`̀+B񷦸d0d4ʁ1q(."KÅ#1XGQ"*L3w-@XՋ-yn2\ õ4wao^>pőʩµe<~y) ‹; ?:|ۛeI5"IG/ Fvu0D"h.'S# :5?Lؾ01t~GߤRHk[xT3@ N!JOJc_Zi.#\T.JMޥ΢](BB[ ,qcܽ7{FK5*iea(bxНeN}ݼD .2&9 qM;2VW4= M_YEʉA= gd?=pj+l$Vu@-4k4_Fae Zsׂ9Ee7ɅsyL ׂnu<% ģ:mamXVae%ӁRzkbPhmiBJ Bt&8#y5gg9}9gƈV3K$sμI9]Nٌ) "7?ÒdӹjH699'Yu|7mLS a9d ラ)5 2O ?ẳfmё:,u֑4C&>J D 14;Y50C{ i]h: 2EQ ']v*k`"Q̥TH O.l 05G=LKu^{+k6m;OܷC- m LtYh󸙃F˖2TƂp 5菮L*SVXSo"z{2+PT]vtN;0-\ *@9!( 2j ĔDH-R {(-SϦ&PmkҰ]=0&=~y=i-rN"ĩbPgD])ӕ8:divFLU>+*U(.,kF[ԃG {嬟mpGE[oF <=w/e}`Z+ (S5L(]$A\ɭQW2ZVjW޻~`Vr#ɛ?끩O|Po.5#xKixy%ٌ/K%5^"8rV>3ty+ o 4XMsvʤZ!ՆvZv,Z`Ӳk2𤛁 !{[{tE jt7~T9*}w j 2,~?csOJo.nW(}rllLGt^3V}V YwscjGWrĥcahtˁzw\~7;bqm_o9/i&tջ= >. ̗]ESSAthū.( iNVqPP]ԧg_bן>O|m1Nlܓ@@&`˓MF`6. 
`..tם&Oa\s6!퐴ڔ Em=^_zl2 gO=MgAڑli4sI~SkME&`b [ |!1F˜'нμh`}sj RhdM/2F,{EH4.9(tHkxEX Vwu~/vƱ#}Ί*Vkm2Ж2O7%r(^:0E)vdk4rوvÓz Cj}E:2 ,3HF[(Fwo)z&K yIB|NQ_=XIfϠS`/Tj쐇!MSre%g&f'ɲ0&d&I/0FQymC߫4{|!`Mh%!7h=c\Zfku~}=. !72묧VMMO[ZuKˬCm,8>Ʋ026WƤ~2:iLZK`/Z79iYe::}i%>,֞j~nuM6(V)N3Zea. %kRrLD#xi99#EԊ|zFۭ;\qȣ-308#kLmjuv_T;n J;-p3!4 :N?a1SFByޗ9]h=bmqrF%ܚ@:]">"G;9aGIxDe*q\ KKEx邶JţؒE 682qIwuqEU꯷w]b@it45ݭl&iߞƹs2D [.4.̘\"Y֢$ ゎʙqe[t ޗJA+m!S_OhGM)cuwBm*^OʒmVtnT&jL@JCliYJqBeߏ{A7l^HB<Ӹ.0z8xEF'ĵ$njh],uJks8 @Yx.FKIBҍ[v`ӫŊd~/|kZP[L D%¸Ob'CP`z`II]BK(KAH(RBD Ü+L $&+rްbN (1h:* Vdbd7~T8't[PfVS9F|fu(-x `π&ӞsYߘ;|a뵺MZ h?T?R%hb=Fu38V$Ło܆P9{ҳn)]X 7f hO'-Nn^CdV UB'MñJQ95lF<"cŘq*# dk&ܝlw0\rV8C@N6ʲ! &lJ~Ɓ |r:( ? ?MˈteG1#|ʹq[Ǿ^M7^ΜU|_.5`z;0QށS/&u<5s1!Jԇ>̇q<5Nj j·xb$b0\~M d8um2T?">DhQއ&zc+~A6&# {d@dTqJqJҘl?DYF&W҈HP%)B:i&a=n\~ V#n*zVkDFR>*˙Vy9orQs09"GHf0mfQiB4~^|D]umVn3,\ߏ~^]SmkDKXAIu"-B{'m&zk3Ibm2)Q|rgk߆Di х4eT:7MK m Ey."gELI#ldk̎հq4J0AE@d DK40\`]i;H4I 4|QT#izֆ\Fw,=Hx5yL&%1.,S|xNia@ }+ ։`ljlR۱Oʻ+EՐmL=5H&O2N;h;hӛBpaf0!Z:Cϝ{i,9:q%<ayM& Z1Ն)T3[}5(yjѰ892\'1ˑM&dRV;PEj"[u` T""fghTn=peihvU!h/7oƠnUq|К1mC%FwT]7~^~ Cp2n[fFOyEՁo+܀__fa_DYu=t!/\E蔦p:źQFºA}GuPm˔`֭=luBC^)#BNncnmy:mߑb&ZkY֭\HօpmSY\TrOs]NDt<ȡ0бʥTn cI*3Y^v4ʽQ3QN+Jl{ iWohP_} d|6Os[9Ov+eB2r_kb%Ù'T "$((0j n8G{sH$(%qPȐ+PM{(T9R!kSnjŹĵ"&x]; 1D 4]q+`<@S-C FI5p# g%^rfKYXj/CF.Cw-3ul3&Jgym A4LgZ?7o6Zub_;1&af]U(yUpWᯅD5(l9E3 [<%<.@/ J7?~ϧ]Z¶HV>B$]\l#0H]¶HDmfܡwZQH0'M-zC֡NѵrBr~WZẀƫCi-M#Lg b:wzٴ5g7W+, qϸ$ҏd x<)llm!,;yUjw4%p!'+-`o?ߴўw$+kyK1J˜0qFnpC y1G?󞹻!˵;M(Z}#[A̾wq6񽳛_͌@12;Z|\EEHTz¹2%wބɺ Jt8lhEMwh 6"f~kj1&<6gs,zw7$Ũ<\v>g2cgz C1v91R(R8W>7k9͘im8%`YFdUeH6y=4QIUdʁʾ/I'/F.bPĞ[Cxxև#=AM1z:֑aUT^Ӑz] 5XfI { D1-nϘ?$ra$tť DJW#:Ox7>"e&5갆iS1FLdDTyi[BY)ΤBP[NR<|B4QKjpLiA]@e P !˅7 0ip͒0/,ZE2GM(piy iL4isM`+|Ѵff Ik8eӄmnR-f[9x:L `E:gL r9R"#~ 28{vhd҉Ʉ~)7:L&hdCf󣠐MrleYEapnK(l, 01 ةoK1/ *Nf:Sk.9\s @'75,a3UQA`IpwvGAҐYa Oٙ}}q$)P#E|cLiHG3vx)Fyv>|RƬ4'WY*Rbezvy{h6խCZ(^zW"_,Zp&nـ|7 O4I:I *Z 5kslfF#y-SI[t^}bYhsSBsԇW~⧋II <h >g઼z&"% =nHpza %b\`tly݌`-ijI#!ɑ8 EŪb=/耘I{Gv2r 鯣 T}叆.w M"pGM^PS5lӷ!E[(vl5s>fM 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 07 07:50:08 crc kubenswrapper[4761]: body: Mar 07 07:50:08 crc kubenswrapper[4761]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:10.085787 +0000 UTC m=+6.994953505,LastTimestamp:2026-03-07 07:49:10.085787 +0000 UTC m=+6.994953505,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 07:50:08 crc kubenswrapper[4761]: > Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.948316 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa506e77a2f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:10.085876271 +0000 UTC m=+6.995042786,LastTimestamp:2026-03-07 07:49:10.085876271 +0000 UTC m=+6.995042786,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.955915 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 07 07:50:08 crc kubenswrapper[4761]: &Event{ObjectMeta:{kube-apiserver-crc.189a7fa6f9a3508d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 07 07:50:08 crc kubenswrapper[4761]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 07 07:50:08 crc kubenswrapper[4761]: Mar 07 07:50:08 crc 
kubenswrapper[4761]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:18.453239949 +0000 UTC m=+15.362406424,LastTimestamp:2026-03-07 07:49:18.453239949 +0000 UTC m=+15.362406424,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 07:50:08 crc kubenswrapper[4761]: > Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.963321 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa6f9a3cc12 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:18.45327157 +0000 UTC m=+15.362438045,LastTimestamp:2026-03-07 07:49:18.45327157 +0000 UTC m=+15.362438045,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.969350 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189a7fa6f9a3508d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 07 07:50:08 crc kubenswrapper[4761]: &Event{ObjectMeta:{kube-apiserver-crc.189a7fa6f9a3508d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 07 07:50:08 crc kubenswrapper[4761]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 07 07:50:08 crc kubenswrapper[4761]: Mar 07 07:50:08 crc kubenswrapper[4761]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:18.453239949 +0000 UTC m=+15.362406424,LastTimestamp:2026-03-07 07:49:18.457257686 +0000 UTC m=+15.366424201,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 07:50:08 crc kubenswrapper[4761]: > Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.974647 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189a7fa6f9a3cc12\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa6f9a3cc12 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:18.45327157 +0000 UTC m=+15.362438045,LastTimestamp:2026-03-07 07:49:18.457337998 +0000 UTC m=+15.366504513,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.981389 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189a7fa44dd92d60\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa44dd92d60 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.981154144 +0000 UTC m=+3.890320619,LastTimestamp:2026-03-07 07:49:18.796219966 +0000 UTC m=+15.705386441,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.987188 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189a7fa4592bddcd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa4592bddcd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:07.171122637 +0000 UTC m=+4.080289142,LastTimestamp:2026-03-07 07:49:19.122877581 +0000 UTC m=+16.032044056,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.992957 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189a7fa459c4b327\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa459c4b327 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:07.181138727 +0000 UTC m=+4.090305212,LastTimestamp:2026-03-07 07:49:19.178138991 +0000 UTC m=+16.087305466,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.000476 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event=< Mar 07 07:50:09 crc kubenswrapper[4761]: &Event{ObjectMeta:{kube-controller-manager-crc.189a7fa75afc703f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 07 07:50:09 crc kubenswrapper[4761]: body: Mar 07 07:50:09 crc kubenswrapper[4761]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:20.086470719 +0000 UTC m=+16.995637234,LastTimestamp:2026-03-07 07:49:20.086470719 +0000 UTC m=+16.995637234,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 07:50:09 crc kubenswrapper[4761]: > Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.007640 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa75afde769 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:20.086566761 +0000 UTC m=+16.995733266,LastTimestamp:2026-03-07 07:49:20.086566761 +0000 UTC m=+16.995733266,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.011774 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7fa75afc703f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 07 07:50:09 crc kubenswrapper[4761]: &Event{ObjectMeta:{kube-controller-manager-crc.189a7fa75afc703f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 07 07:50:09 crc kubenswrapper[4761]: body: Mar 07 07:50:09 crc kubenswrapper[4761]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:20.086470719 +0000 UTC m=+16.995637234,LastTimestamp:2026-03-07 07:49:30.085624829 +0000 UTC m=+26.994791344,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 07:50:09 crc kubenswrapper[4761]: > Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.018770 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7fa75afde769\" is forbidden: User 
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa75afde769 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:20.086566761 +0000 UTC m=+16.995733266,LastTimestamp:2026-03-07 07:49:30.085760713 +0000 UTC m=+26.994927238,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.025323 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa9af401d88 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:30.090126728 +0000 UTC m=+26.999293243,LastTimestamp:2026-03-07 
07:49:30.090126728 +0000 UTC m=+26.999293243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.029465 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7fa3dfb93aad\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa3dfb93aad openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.133566637 +0000 UTC m=+2.042733122,LastTimestamp:2026-03-07 07:49:30.839586081 +0000 UTC m=+27.748752596,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.033333 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7fa3fae74ba9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa3fae74ba9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.589570473 +0000 UTC m=+2.498736958,LastTimestamp:2026-03-07 07:49:31.072935309 +0000 UTC m=+27.982101784,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.038006 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7fa3ff0088a0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa3ff0088a0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.658333344 +0000 UTC m=+2.567499819,LastTimestamp:2026-03-07 07:49:31.103024014 +0000 UTC m=+28.012190489,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.043748 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7fa75afc703f\" is forbidden: User \"system:anonymous\" 
cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 07 07:50:09 crc kubenswrapper[4761]: &Event{ObjectMeta:{kube-controller-manager-crc.189a7fa75afc703f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 07 07:50:09 crc kubenswrapper[4761]: body: Mar 07 07:50:09 crc kubenswrapper[4761]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:20.086470719 +0000 UTC m=+16.995637234,LastTimestamp:2026-03-07 07:49:40.084996106 +0000 UTC m=+36.994162611,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 07:50:09 crc kubenswrapper[4761]: > Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.048141 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7fa75afde769\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa75afde769 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:20.086566761 +0000 UTC m=+16.995733266,LastTimestamp:2026-03-07 07:49:40.085049987 +0000 UTC m=+36.994216492,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.053599 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7fa75afc703f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 07 07:50:09 crc kubenswrapper[4761]: &Event{ObjectMeta:{kube-controller-manager-crc.189a7fa75afc703f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 07 07:50:09 crc kubenswrapper[4761]: body: Mar 07 07:50:09 crc kubenswrapper[4761]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:20.086470719 +0000 UTC m=+16.995637234,LastTimestamp:2026-03-07 07:49:50.085640621 +0000 UTC m=+46.994807126,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 07:50:09 crc kubenswrapper[4761]: > Mar 07 07:50:09 crc kubenswrapper[4761]: I0307 07:50:09.655605 4761 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:10 crc kubenswrapper[4761]: I0307 07:50:10.086556 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 07:50:10 crc kubenswrapper[4761]: I0307 07:50:10.086675 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 07:50:10 crc kubenswrapper[4761]: I0307 07:50:10.653092 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:10 crc kubenswrapper[4761]: I0307 07:50:10.706397 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:10 crc kubenswrapper[4761]: I0307 07:50:10.708015 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:10 crc kubenswrapper[4761]: I0307 07:50:10.708055 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:10 crc kubenswrapper[4761]: I0307 07:50:10.708064 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:10 crc kubenswrapper[4761]: I0307 07:50:10.708583 4761 scope.go:117] "RemoveContainer" containerID="d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596" Mar 07 07:50:10 crc kubenswrapper[4761]: E0307 07:50:10.708758 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 07:50:11 crc kubenswrapper[4761]: I0307 07:50:11.654627 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:12 crc kubenswrapper[4761]: I0307 07:50:12.653906 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:13 crc kubenswrapper[4761]: I0307 07:50:13.654153 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:13 crc kubenswrapper[4761]: E0307 07:50:13.789409 4761 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 07 07:50:13 crc kubenswrapper[4761]: E0307 07:50:13.889073 4761 controller.go:145] "Failed to ensure lease exists, will retry" 
err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 07 07:50:13 crc kubenswrapper[4761]: I0307 07:50:13.892332 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:13 crc kubenswrapper[4761]: I0307 07:50:13.893624 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:13 crc kubenswrapper[4761]: I0307 07:50:13.893665 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:13 crc kubenswrapper[4761]: I0307 07:50:13.893684 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:13 crc kubenswrapper[4761]: I0307 07:50:13.893745 4761 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 07:50:13 crc kubenswrapper[4761]: E0307 07:50:13.901263 4761 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 07 07:50:14 crc kubenswrapper[4761]: I0307 07:50:14.654895 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:15 crc kubenswrapper[4761]: I0307 07:50:15.652904 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:16 crc kubenswrapper[4761]: I0307 07:50:16.040208 4761 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 07 07:50:16 crc kubenswrapper[4761]: I0307 07:50:16.062167 4761 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 07 07:50:16 crc kubenswrapper[4761]: I0307 07:50:16.657416 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:17 crc kubenswrapper[4761]: I0307 07:50:17.091696 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:50:17 crc kubenswrapper[4761]: I0307 07:50:17.091990 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:17 crc kubenswrapper[4761]: I0307 07:50:17.093551 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:17 crc kubenswrapper[4761]: I0307 07:50:17.093601 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:17 crc kubenswrapper[4761]: I0307 07:50:17.093620 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:17 crc kubenswrapper[4761]: I0307 07:50:17.096177 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:50:17 crc kubenswrapper[4761]: I0307 07:50:17.653281 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:18 crc kubenswrapper[4761]: 
I0307 07:50:18.026336 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:18 crc kubenswrapper[4761]: I0307 07:50:18.027569 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:18 crc kubenswrapper[4761]: I0307 07:50:18.027868 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:18 crc kubenswrapper[4761]: I0307 07:50:18.027903 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:18 crc kubenswrapper[4761]: I0307 07:50:18.655119 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:18 crc kubenswrapper[4761]: W0307 07:50:18.774112 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 07 07:50:18 crc kubenswrapper[4761]: E0307 07:50:18.774173 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 07 07:50:18 crc kubenswrapper[4761]: I0307 07:50:18.953077 4761 csr.go:261] certificate signing request csr-cbsbn is approved, waiting to be issued Mar 07 07:50:19 crc kubenswrapper[4761]: I0307 07:50:19.654912 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.045577 4761 csr.go:257] certificate signing request csr-cbsbn is issued Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.155087 4761 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.530520 4761 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.902349 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.903762 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.903803 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.903816 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.903936 4761 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.910655 4761 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.910899 4761 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 07 07:50:20 crc kubenswrapper[4761]: E0307 07:50:20.910924 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.919066 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 
07:50:20.919120 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.919134 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.919155 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.919170 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:20Z","lastTransitionTime":"2026-03-07T07:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:20 crc kubenswrapper[4761]: E0307 07:50:20.929028 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.935258 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.935300 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.935310 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.935327 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.935337 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:20Z","lastTransitionTime":"2026-03-07T07:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:20 crc kubenswrapper[4761]: E0307 07:50:20.944377 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.949600 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.949638 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.949649 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.949665 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.949677 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:20Z","lastTransitionTime":"2026-03-07T07:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:20 crc kubenswrapper[4761]: E0307 07:50:20.957236 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.964265 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.964313 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.964325 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.964345 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.964360 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:20Z","lastTransitionTime":"2026-03-07T07:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:20 crc kubenswrapper[4761]: E0307 07:50:20.976628 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 07 07:50:20 crc kubenswrapper[4761]: E0307 07:50:20.976821    4761 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 07 07:50:20 crc kubenswrapper[4761]: E0307 07:50:20.976856    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:21 crc kubenswrapper[4761]: I0307 07:50:21.046999    4761 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-12 08:33:08.04284998 +0000 UTC
Mar 07 07:50:21 crc kubenswrapper[4761]: I0307 07:50:21.047068    4761 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6720h42m46.995785693s for next certificate rotation
Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.077898    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.178833    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.279563    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.380263    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.481323    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.581948    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.682804    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:21 crc kubenswrapper[4761]: I0307 07:50:21.705306    4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 07 07:50:21 crc kubenswrapper[4761]: I0307 07:50:21.706653    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 07:50:21 crc kubenswrapper[4761]: I0307 07:50:21.706713    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 07:50:21 crc kubenswrapper[4761]: I0307 07:50:21.706769    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 07:50:21 crc kubenswrapper[4761]: I0307 07:50:21.707767    4761 scope.go:117] "RemoveContainer" containerID="d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596"
Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.708050    4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.783037    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.884036    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.984689    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.085225    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.185863    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.286644    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.387607    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.487979    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.588193    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.689005    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.789135    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.889553    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.990389    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:23 crc kubenswrapper[4761]: E0307 07:50:23.091252    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:23 crc kubenswrapper[4761]: E0307 07:50:23.192251    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:23 crc kubenswrapper[4761]: E0307 07:50:23.292864    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:24 crc kubenswrapper[4761]: E0307 07:50:24.201818    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:24 crc kubenswrapper[4761]: E0307 07:50:24.201955    4761 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 07 07:50:24 crc kubenswrapper[4761]: E0307 07:50:24.302218    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:24 crc kubenswrapper[4761]: E0307 07:50:24.403115    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:24 crc kubenswrapper[4761]: E0307 07:50:24.503576    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:24 crc kubenswrapper[4761]: E0307 07:50:24.604162    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:24 crc kubenswrapper[4761]: E0307 07:50:24.705046    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:24 crc kubenswrapper[4761]: E0307 07:50:24.805747    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:24 crc kubenswrapper[4761]: E0307 07:50:24.906385    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.007389    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.107707    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.208112    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.308750    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.409819    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.510071    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.611161    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.711918    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.812982    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.913465    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.013995    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.114180    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.214890    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.315565    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.416138    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.516898    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.617828    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.718362    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.819472    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.920177    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.021016    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.121257    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.221640    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.322375    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.422782    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.523565    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.624504    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:27 crc kubenswrapper[4761]: I0307 07:50:27.705708    4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 07 07:50:27 crc kubenswrapper[4761]: I0307 07:50:27.707134    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 07:50:27 crc kubenswrapper[4761]: I0307 07:50:27.707203    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 07:50:27 crc kubenswrapper[4761]: I0307 07:50:27.707225    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.724951    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.825644    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.926395    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.026557    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.127515    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.228013    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.328810    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.429266    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.530237    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.630799    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.731610    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.832261    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.932901    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:29 crc kubenswrapper[4761]: E0307 07:50:29.033712    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:29 crc kubenswrapper[4761]: E0307 07:50:29.134342    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.169319    4761 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 07 07:50:29 crc kubenswrapper[4761]: E0307 07:50:29.237767    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:29 crc kubenswrapper[4761]: E0307 07:50:29.338398    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:29 crc kubenswrapper[4761]: E0307 07:50:29.438829    4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.536355    4761 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.541926    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.541970    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.541986    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.542009    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.542027    4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:29Z","lastTransitionTime":"2026-03-07T07:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.644874    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.644943    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.644976    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.645007    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.645024    4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:29Z","lastTransitionTime":"2026-03-07T07:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.748162    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.748234    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.748260    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.748289    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.748312    4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:29Z","lastTransitionTime":"2026-03-07T07:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.851284    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.851323    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.851334    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.851353    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.851364    4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:29Z","lastTransitionTime":"2026-03-07T07:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.954270    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.954310    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.954322    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.954337    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.954348    4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:29Z","lastTransitionTime":"2026-03-07T07:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.056452    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.056529    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.056549    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.056575    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.056594    4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.159911    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.159998    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.160023    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.160063    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.160089    4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.204138    4761 apiserver.go:52] "Watching apiserver"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.210440    4761 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.210853    4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.211367    4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.211523    4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.211623    4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.211689    4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.212161    4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.212283    4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.212425    4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.212585    4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.212711    4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.216103    4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.216120    4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.216108    4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.216239    4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.218006    4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.218500    4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.218839    4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.219248    4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.220217    4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.249431    4761 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.258559    4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.262410    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.262459    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.262477    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.262503    4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.262524    4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.269550    4761 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.277742    4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.288760    4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.303512    4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.315909 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346532 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346600 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: 
\"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346657 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346693 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346746 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346776 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346808 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346840 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346872 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346905 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346940 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346971 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347000 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347032 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347033 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347061 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347143 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347174 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347196 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347225 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347250 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347274 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347297 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347321 4761 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347342 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347364 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347386 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347408 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347431 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347453 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347474 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347497 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347518 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347540 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347563 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347586 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347611 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347633 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347656 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347069 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347046 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347741 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347268 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347291 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347296 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347328 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347766 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347791 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347422 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347460 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347476 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347471 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347559 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347679 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347685 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347909 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347941 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347745 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347984 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347992 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348001 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348062 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348100 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348125 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348150 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348161 4761 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348173 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348198 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348222 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348252 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348275 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348296 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348318 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348343 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348367 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348392 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 07 07:50:30 crc 
kubenswrapper[4761]: I0307 07:50:30.348414 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348438 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348460 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348481 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348502 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348604 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348631 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348652 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348675 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348699 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348739 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 
07:50:30.348763 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348788 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348810 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348829 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348850 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348870 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348894 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348914 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348939 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348964 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348987 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349013 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349033 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349059 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349080 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349101 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349123 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349145 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349168 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349192 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349217 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349239 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349260 4761 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349285 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349308 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349334 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349355 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349375 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349398 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349421 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349444 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349465 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349488 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349510 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349535 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349558 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349583 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349625 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349649 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " 
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349671 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349695 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349823 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349848 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349869 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349895 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: 
\"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349937 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.353883 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.358260 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.372106 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.372593 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.372963 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.373379 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.373806 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.373844 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.373876 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.373907 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376441 4761 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376472 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376483 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376502 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376515 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348176 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348220 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348223 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348243 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348379 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348424 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349210 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349384 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349472 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349508 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349570 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378102 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349583 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349664 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349703 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.371851 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.372511 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.372903 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.373236 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.373701 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.374102 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.374255 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.374268 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.374189 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.374528 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.374534 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:50:30.874507723 +0000 UTC m=+87.783674218 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.374461 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.374636 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.374990 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.375036 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.375113 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.375323 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.375519 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.375663 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.375666 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.375769 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.375798 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376389 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376500 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376509 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376613 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376798 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376993 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377119 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377115 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377143 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377259 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377433 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377452 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377575 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377880 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378327 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378356 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378423 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378465 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378487 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378493 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378578 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378649 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378681 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378706 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378750 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378772 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378794 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378813 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378835 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378859 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 07 07:50:30 crc 
kubenswrapper[4761]: I0307 07:50:30.378881 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378904 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378926 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378949 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378974 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379012 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379059 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379086 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379113 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379139 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379163 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 
07:50:30.379189 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379212 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379235 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379259 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379281 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379303 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379326 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379349 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379372 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379400 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379426 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 07 07:50:30 crc 
kubenswrapper[4761]: I0307 07:50:30.379453 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379476 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379501 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379527 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379552 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379580 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379604 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379626 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379733 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379810 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379837 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 07 07:50:30 crc 
kubenswrapper[4761]: I0307 07:50:30.379861 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379979 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.380010 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.380135 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.380168 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.380198 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378570 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377862 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378670 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377973 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378142 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378888 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378906 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.381504 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379055 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379100 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379142 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379545 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379767 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379783 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379781 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379816 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.380069 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.380689 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.380739 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.380962 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.380973 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.381157 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.381411 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.381826 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.381843 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.381431 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.381921 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.382467 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.382525 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.382595 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.382624 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.382955 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.383215 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384008 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384057 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384074 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384303 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384358 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384496 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384615 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384644 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384751 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384920 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384974 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384993 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384974 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385083 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385110 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385145 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385136 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385284 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385327 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385325 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385372 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385398 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385421 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385447 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.386665 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.386841 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385599 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385494 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385597 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385793 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385825 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385887 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385928 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.386088 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.386105 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.386962 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.387065 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.387162 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.387301 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.387449 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.387472 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.387641 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.387675 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.387762 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.386538 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.386760 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.386789 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.388141 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.388308 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.388334 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.388345 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.386424 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.388516 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.388588 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.388364 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.388646 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.388670 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389015 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389069 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389243 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389361 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389437 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389476 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389520 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389566 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389626 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389666 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389707 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389805 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389978 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.390230 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.390303 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.390359 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.390399 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.390665 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.390743 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod 
\"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.390793 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.391169 4761 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.391771 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.391776 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.391860 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.391901 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.391930 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.391965 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392013 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392035 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392120 4761 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392142 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392155 4761 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392169 4761 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392181 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392195 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 
07:50:30.392208 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392221 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392235 4761 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392248 4761 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392264 4761 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392281 4761 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392298 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392316 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392332 4761 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392348 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392362 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392375 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392379 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392389 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.392432 4761 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392445 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392468 4761 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.392446 4761 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.392487 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:30.892469857 +0000 UTC m=+87.801636352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392546 4761 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392566 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.392600 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:30.89257545 +0000 UTC m=+87.801742045 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392618 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392634 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392647 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392659 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392671 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392684 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392697 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392710 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392664 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392745 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392800 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392821 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392841 4761 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392861 4761 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392877 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392894 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392910 4761 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392927 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392945 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392963 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393172 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393195 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393213 4761 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393234 4761 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393251 4761 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393268 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393285 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393303 4761 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393323 4761 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393338 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393354 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393372 4761 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393390 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393408 4761 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393425 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393442 4761 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393597 4761 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393646 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393664 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393681 4761 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393698 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393964 4761 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393981 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393997 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394018 4761 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394035 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394053 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394071 4761 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394088 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394105 4761 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394121 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394141 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394158 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394175 4761 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394191 4761 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394210 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394227 4761 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394244 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394260 4761 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394277 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394294 4761 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394310 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394327 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394344 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394363 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394380 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394396 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394415 4761 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394430 4761 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394445 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394460 4761 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394477 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394494 4761 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394509 4761 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394525 4761 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394540 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394556 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394572 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394589 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394605 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394620 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394946 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394965 4761 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394981 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394999 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395015 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395034 4761 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395051 4761 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395081 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395098 4761 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395114 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395804 4761 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395830 4761 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395844 4761 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395857 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395871 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395887 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395899 4761 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395911 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395923 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395935 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395946 4761 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395958 4761 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395971 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395983 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395995 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396007 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396019 4761 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396032 4761 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396044 4761 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396056 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396068 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396079 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396091 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396103 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396119 4761 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396136 4761 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396152 4761 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396167 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396183 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396198 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396213 4761 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396231 4761 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName:
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396247 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396262 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396278 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396293 4761 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396308 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396324 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396342 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396358 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396373 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.406694 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.406754 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.406770 4761 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.406848 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:30.906828193 +0000 UTC m=+87.815994668 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.407792 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.407820 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.407901 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.407920 4761 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.407981 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-03-07 07:50:30.907960921 +0000 UTC m=+87.817127466 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.408014 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.408876 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.409562 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.411070 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.411266 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.411387 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.412431 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.412555 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.412599 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.412731 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.414882 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.414981 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.415133 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.415287 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.415901 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.415992 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.416025 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.418842 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.418871 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.419058 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.419264 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.421209 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.422887 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: 
"trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.423166 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.423225 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.423240 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.423290 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.423390 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.423596 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.423850 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.424084 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.424367 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.425048 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.425525 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.425602 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.426131 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.426163 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.426388 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.426762 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.427165 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.428814 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.440260 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.440825 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.444858 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.455642 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.479866 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.479919 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.479936 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.479961 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.479979 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497243 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497314 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497371 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497392 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497451 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497463 4761 reconciler_common.go:293] "Volume 
detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497472 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497481 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497491 4761 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497502 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497513 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497525 4761 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497535 4761 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497547 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497558 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497568 4761 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497579 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497591 4761 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497604 4761 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497648 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497659 4761 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497670 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497680 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497689 4761 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497699 4761 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497708 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497753 4761 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" 
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497765 4761 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497776 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497786 4761 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497797 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497807 4761 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497817 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497827 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497838 4761 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497848 4761 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497859 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497871 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497882 4761 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497893 4761 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497904 4761 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497915 4761 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497927 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497937 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.541298 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.555935 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:30 crc kubenswrapper[4761]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 07 07:50:30 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:30 crc kubenswrapper[4761]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 07 07:50:30 crc kubenswrapper[4761]: source /etc/kubernetes/apiserver-url.env Mar 07 07:50:30 crc kubenswrapper[4761]: else Mar 07 07:50:30 crc kubenswrapper[4761]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 07 07:50:30 crc kubenswrapper[4761]: exit 1 Mar 07 07:50:30 crc kubenswrapper[4761]: fi Mar 07 07:50:30 crc kubenswrapper[4761]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 07 07:50:30 crc kubenswrapper[4761]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:30 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.557782 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.560704 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 07:50:30 crc kubenswrapper[4761]: W0307 07:50:30.571599 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-c13ea7ea2720f74106916082d09bfc695fb55187feb4b9f073f6156c185ac096 WatchSource:0}: Error finding container c13ea7ea2720f74106916082d09bfc695fb55187feb4b9f073f6156c185ac096: Status 404 returned error can't find the container with id c13ea7ea2720f74106916082d09bfc695fb55187feb4b9f073f6156c185ac096 Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.573861 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:30 crc kubenswrapper[4761]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:30 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:30 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:30 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:30 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:30 crc kubenswrapper[4761]: fi Mar 07 07:50:30 crc kubenswrapper[4761]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 07 07:50:30 crc kubenswrapper[4761]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 07 07:50:30 crc kubenswrapper[4761]: ho_enable="--enable-hybrid-overlay" Mar 07 07:50:30 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 07 07:50:30 crc kubenswrapper[4761]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 07 07:50:30 crc kubenswrapper[4761]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 07 07:50:30 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:50:30 crc kubenswrapper[4761]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 07 07:50:30 crc kubenswrapper[4761]: --webhook-host=127.0.0.1 \ Mar 07 07:50:30 crc kubenswrapper[4761]: --webhook-port=9743 \ Mar 07 07:50:30 crc kubenswrapper[4761]: ${ho_enable} \ Mar 07 07:50:30 crc kubenswrapper[4761]: --enable-interconnect \ Mar 07 07:50:30 crc kubenswrapper[4761]: --disable-approver \ Mar 07 07:50:30 crc kubenswrapper[4761]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 07 07:50:30 crc kubenswrapper[4761]: --wait-for-kubernetes-api=200s \ Mar 07 07:50:30 crc kubenswrapper[4761]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 07 07:50:30 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:50:30 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:30 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.576664 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:30 crc kubenswrapper[4761]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:30 crc 
kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:30 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:30 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:30 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:30 crc kubenswrapper[4761]: fi Mar 07 07:50:30 crc kubenswrapper[4761]: Mar 07 07:50:30 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 07 07:50:30 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:50:30 crc kubenswrapper[4761]: --disable-webhook \ Mar 07 07:50:30 crc kubenswrapper[4761]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 07 07:50:30 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:50:30 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:30 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.578290 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.580505 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.582109 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.582138 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.582148 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.582163 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.582172 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:30 crc kubenswrapper[4761]: W0307 07:50:30.591489 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-802b7bab9a85efff5f820c4441940fc1a00a9a2257c4ee2e9c2ed1dd9b4b22d8 WatchSource:0}: Error finding container 802b7bab9a85efff5f820c4441940fc1a00a9a2257c4ee2e9c2ed1dd9b4b22d8: Status 404 returned error can't find the container with id 802b7bab9a85efff5f820c4441940fc1a00a9a2257c4ee2e9c2ed1dd9b4b22d8 Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.593664 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.594958 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.684924 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.684968 
4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.684985 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.685006 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.685023 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.787579 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.787628 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.787644 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.787665 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.787684 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.890278 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.890318 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.890326 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.890342 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.890351 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.901525 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.901603 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.901628 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.901702 4761 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.901731 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:50:31.901687538 +0000 UTC m=+88.810854013 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.901763 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:31.901756109 +0000 UTC m=+88.810922584 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.901817 4761 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.901896 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:31.901877292 +0000 UTC m=+88.811043777 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.992430 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.992488 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.992504 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.992527 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.992543 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.002124 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.002202 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.002400 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.002436 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.002452 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.002475 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.002503 4761 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl 
for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.002476 4761 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.002585 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:32.002557066 +0000 UTC m=+88.911723591 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.002686 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:32.002661168 +0000 UTC m=+88.911827683 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.095312 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.095381 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.095400 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.095425 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.095445 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.198625 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.198678 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.198688 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.198703 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.198728 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.210152 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.210214 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.210232 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.210256 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.210274 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.228891 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.234098 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.234180 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.234198 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.234226 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.234249 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.253416 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.254119 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"802b7bab9a85efff5f820c4441940fc1a00a9a2257c4ee2e9c2ed1dd9b4b22d8"} Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.256935 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.257456 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c13ea7ea2720f74106916082d09bfc695fb55187feb4b9f073f6156c185ac096"} Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.258121 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" 
pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.261061 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.261122 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.261145 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.261171 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.261195 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.261391 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:31 crc kubenswrapper[4761]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:31 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:31 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:31 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:31 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:31 crc kubenswrapper[4761]: fi Mar 07 07:50:31 crc kubenswrapper[4761]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 07 07:50:31 crc kubenswrapper[4761]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 07 07:50:31 crc kubenswrapper[4761]: ho_enable="--enable-hybrid-overlay" Mar 07 07:50:31 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 07 07:50:31 crc kubenswrapper[4761]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 07 07:50:31 crc kubenswrapper[4761]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 07 07:50:31 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:50:31 crc kubenswrapper[4761]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 07 07:50:31 crc kubenswrapper[4761]: --webhook-host=127.0.0.1 \ Mar 07 07:50:31 crc kubenswrapper[4761]: --webhook-port=9743 \ Mar 07 07:50:31 crc kubenswrapper[4761]: ${ho_enable} \ Mar 07 07:50:31 crc kubenswrapper[4761]: --enable-interconnect \ Mar 07 07:50:31 crc kubenswrapper[4761]: --disable-approver \ Mar 07 07:50:31 crc kubenswrapper[4761]: 
--extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 07 07:50:31 crc kubenswrapper[4761]: --wait-for-kubernetes-api=200s \ Mar 07 07:50:31 crc kubenswrapper[4761]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 07 07:50:31 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:50:31 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false
,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:31 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.266934 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ade0baa99af32b4a3c886232d8232db7cc43cf8af7498d64d7e88e60966b373e"} Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.268505 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:31 crc kubenswrapper[4761]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:31 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:31 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:31 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:31 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:31 crc kubenswrapper[4761]: fi Mar 07 07:50:31 crc kubenswrapper[4761]: Mar 07 07:50:31 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 07 07:50:31 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:50:31 crc kubenswrapper[4761]: --disable-webhook \ Mar 07 07:50:31 crc kubenswrapper[4761]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 07 07:50:31 crc kubenswrapper[4761]: 
--loglevel="${LOGLEVEL}" Mar 07 07:50:31 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:31 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.270024 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:31 crc kubenswrapper[4761]: container 
&Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 07 07:50:31 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:31 crc kubenswrapper[4761]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 07 07:50:31 crc kubenswrapper[4761]: source /etc/kubernetes/apiserver-url.env Mar 07 07:50:31 crc kubenswrapper[4761]: else Mar 07 07:50:31 crc kubenswrapper[4761]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 07 07:50:31 crc kubenswrapper[4761]: exit 1 Mar 07 07:50:31 crc kubenswrapper[4761]: fi Mar 07 07:50:31 crc kubenswrapper[4761]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 07 07:50:31 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c
64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_
CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:31 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.270166 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.271175 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.273900 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.286212 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.287893 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.294245 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.294289 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.294307 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.294330 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.294348 4761 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.302076 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.314902 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.316072 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.320003 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.320086 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.320111 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.320142 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.320165 4761 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.334120 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.337341 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.337573 4761 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.343328 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.343393 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.343405 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.343423 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.343458 4761 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.351044 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.367571 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.383084 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.398833 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.419243 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.436065 4761 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.451268 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.451342 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.451364 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.451390 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.451409 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.452532 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.554920 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.554980 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.554998 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.555022 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.555040 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.657830 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.657891 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.657909 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.657934 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.657951 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.705038 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.705061 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.705061 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.705512 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.705626 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.705817 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.712553 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.713339 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.715218 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.716249 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.717807 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.718503 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.719341 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.720585 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.721419 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.722783 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.723489 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.725022 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.726014 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.727011 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.728630 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.729808 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.731283 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.731842 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.732622 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.734001 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.734617 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.735961 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.736586 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.738102 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.738669 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.739508 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.741068 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.741840 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.743659 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.744624 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.746523 4761 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.746794 4761 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.750176 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.751194 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.752072 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.754375 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.755771 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.758915 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.760301 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.760969 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.761020 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.761037 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.761060 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.761077 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.762548 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.763504 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.765480 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.766909 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.768862 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.769832 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.771751 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.772774 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.775063 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.776029 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.777865 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.778932 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.779971 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.781536 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.782579 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.784258 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 07 07:50:31 crc 
kubenswrapper[4761]: I0307 07:50:31.863707 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.863773 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.863787 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.863805 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.863818 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.910666 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.910825 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.910867 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:50:33.91083858 +0000 UTC m=+90.820005055 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.910961 4761 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.911049 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:33.911020904 +0000 UTC m=+90.820187419 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.910961 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.911052 4761 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.911198 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:33.911169738 +0000 UTC m=+90.820336243 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.966836 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.966887 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.966904 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.966928 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.966945 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.011516 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.011598 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:32 crc kubenswrapper[4761]: E0307 07:50:32.011746 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:32 crc kubenswrapper[4761]: E0307 07:50:32.011784 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:32 crc kubenswrapper[4761]: E0307 07:50:32.011788 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:32 crc kubenswrapper[4761]: E0307 07:50:32.011809 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:32 crc kubenswrapper[4761]: E0307 07:50:32.011819 4761 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr 
for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:32 crc kubenswrapper[4761]: E0307 07:50:32.011826 4761 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:32 crc kubenswrapper[4761]: E0307 07:50:32.011907 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:34.011875972 +0000 UTC m=+90.921042477 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:32 crc kubenswrapper[4761]: E0307 07:50:32.011947 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:34.011929564 +0000 UTC m=+90.921096079 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.069842 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.069903 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.069922 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.069948 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.069965 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.172989 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.173215 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.173241 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.173264 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.173281 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.275402 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.275470 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.275498 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.275527 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.275549 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.378924 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.378982 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.378999 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.379021 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.379039 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.480994 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.481066 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.481076 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.481093 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.481103 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.584343 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.584396 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.584413 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.584438 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.584456 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.687233 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.687295 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.687313 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.687336 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.687353 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.720187 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.720331 4761 scope.go:117] "RemoveContainer" containerID="d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596" Mar 07 07:50:32 crc kubenswrapper[4761]: E0307 07:50:32.720709 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.789386 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.789439 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.789455 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.789480 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.789500 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.891372 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.891433 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.891449 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.891473 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.891491 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.993642 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.993706 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.993749 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.993785 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.993820 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.096343 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.096411 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.096428 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.096452 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.096476 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:33Z","lastTransitionTime":"2026-03-07T07:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.199839 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.199884 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.199896 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.199918 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.199933 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:33Z","lastTransitionTime":"2026-03-07T07:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.272547 4761 scope.go:117] "RemoveContainer" containerID="d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596" Mar 07 07:50:33 crc kubenswrapper[4761]: E0307 07:50:33.272697 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.302770 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.302853 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.302876 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.302911 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.302935 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:33Z","lastTransitionTime":"2026-03-07T07:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.405566 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.405630 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.405648 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.405675 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.405694 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:33Z","lastTransitionTime":"2026-03-07T07:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.508557 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.508602 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.508614 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.508630 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.508642 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:33Z","lastTransitionTime":"2026-03-07T07:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.611951 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.612028 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.612043 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.612062 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.612074 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:33Z","lastTransitionTime":"2026-03-07T07:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.705369 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.705417 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:33 crc kubenswrapper[4761]: E0307 07:50:33.705503 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.705424 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:33 crc kubenswrapper[4761]: E0307 07:50:33.705574 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:33 crc kubenswrapper[4761]: E0307 07:50:33.705863 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.713753 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.713782 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.713792 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.713807 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.713816 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:33Z","lastTransitionTime":"2026-03-07T07:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.724620 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.733913 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.742844 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.751771 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.760607 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.769613 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.778170 4761 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.789214 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.816161 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.816237 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:33 crc 
kubenswrapper[4761]: I0307 07:50:33.816264 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.816289 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.816309 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:33Z","lastTransitionTime":"2026-03-07T07:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.918470 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.918537 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.918555 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.918582 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.918602 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:33Z","lastTransitionTime":"2026-03-07T07:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.928163 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.928227 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.928254 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:33 crc kubenswrapper[4761]: E0307 07:50:33.928331 4761 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:33 crc kubenswrapper[4761]: E0307 07:50:33.928381 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:37.928367745 +0000 UTC m=+94.837534220 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:33 crc kubenswrapper[4761]: E0307 07:50:33.928418 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:50:37.928388326 +0000 UTC m=+94.837554831 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:50:33 crc kubenswrapper[4761]: E0307 07:50:33.928552 4761 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:33 crc kubenswrapper[4761]: E0307 07:50:33.928624 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:37.928607491 +0000 UTC m=+94.837773996 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.021849 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.021890 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.021899 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.021915 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.021924 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.029240 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.029574 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:34 crc kubenswrapper[4761]: E0307 07:50:34.029471 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:34 crc kubenswrapper[4761]: E0307 07:50:34.029942 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:34 crc kubenswrapper[4761]: E0307 07:50:34.030092 4761 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:34 crc kubenswrapper[4761]: E0307 07:50:34.030287 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:38.030263979 +0000 UTC m=+94.939430484 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:34 crc kubenswrapper[4761]: E0307 07:50:34.029753 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:34 crc kubenswrapper[4761]: E0307 07:50:34.030552 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:34 crc kubenswrapper[4761]: E0307 07:50:34.030662 4761 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:34 crc kubenswrapper[4761]: E0307 07:50:34.030894 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:38.030875994 +0000 UTC m=+94.940042509 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.124449 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.124490 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.124501 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.124513 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.124526 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.228041 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.228421 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.228634 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.228883 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.229032 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.332307 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.332363 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.332380 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.332402 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.332418 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.434226 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.434254 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.434262 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.434276 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.434286 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.536182 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.536503 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.536604 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.536734 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.536836 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.639547 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.640038 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.640204 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.640350 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.640498 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.742659 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.742700 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.742708 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.742739 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.742748 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.845266 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.845294 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.845301 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.845314 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.845323 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.947592 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.947621 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.947629 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.947642 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.947651 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.050102 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.050127 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.050220 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.050233 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.050242 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.152802 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.152841 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.152850 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.152867 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.152877 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.255892 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.255940 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.255949 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.255965 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.255975 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.358558 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.358624 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.358643 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.358667 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.358687 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.461469 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.461556 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.461583 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.461614 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.461636 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.565160 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.565223 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.565245 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.565279 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.565302 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.667766 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.667818 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.667835 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.667861 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.667878 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.707890 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:35 crc kubenswrapper[4761]: E0307 07:50:35.708010 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.708040 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:35 crc kubenswrapper[4761]: E0307 07:50:35.708107 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.708116 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:35 crc kubenswrapper[4761]: E0307 07:50:35.708327 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.770936 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.771010 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.771032 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.771065 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.771104 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.874004 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.874068 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.874090 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.874120 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.874143 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.977583 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.977645 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.977671 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.977702 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.977756 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.080644 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.080685 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.080695 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.080709 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.080738 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:36Z","lastTransitionTime":"2026-03-07T07:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.183610 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.183646 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.183659 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.183679 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.183692 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:36Z","lastTransitionTime":"2026-03-07T07:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.286494 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.286554 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.286574 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.286598 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.286617 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:36Z","lastTransitionTime":"2026-03-07T07:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.389115 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.389183 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.389201 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.389225 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.389242 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:36Z","lastTransitionTime":"2026-03-07T07:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.492865 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.492935 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.492957 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.492979 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.492997 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:36Z","lastTransitionTime":"2026-03-07T07:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.595794 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.595864 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.595883 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.595907 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.595925 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:36Z","lastTransitionTime":"2026-03-07T07:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.699306 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.699399 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.699417 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.699795 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.699834 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:36Z","lastTransitionTime":"2026-03-07T07:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.803396 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.803464 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.803482 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.803507 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.803524 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:36Z","lastTransitionTime":"2026-03-07T07:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.906829 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.906883 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.906895 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.906913 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.906925 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:36Z","lastTransitionTime":"2026-03-07T07:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.008990 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.009047 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.009065 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.009087 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.009105 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.112366 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.112440 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.112460 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.112486 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.112502 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.216384 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.216443 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.216459 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.216483 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.216500 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.319708 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.319809 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.319866 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.319888 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.319939 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.422190 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.422218 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.422226 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.422263 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.422274 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.524203 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.524247 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.524264 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.524286 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.524302 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.627546 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.627620 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.627642 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.627674 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.627698 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.705704 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.705807 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.705807 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:37 crc kubenswrapper[4761]: E0307 07:50:37.705932 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:37 crc kubenswrapper[4761]: E0307 07:50:37.706113 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:37 crc kubenswrapper[4761]: E0307 07:50:37.706279 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.730160 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.730220 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.730230 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.730248 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.730260 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.832992 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.833029 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.833041 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.833054 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.833063 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.935902 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.935990 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.936015 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.936044 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.936068 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.965071 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.965153 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.965189 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:37 crc kubenswrapper[4761]: E0307 07:50:37.965278 4761 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:37 crc kubenswrapper[4761]: E0307 07:50:37.965301 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:50:45.965269211 +0000 UTC m=+102.874435726 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:50:37 crc kubenswrapper[4761]: E0307 07:50:37.965344 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:45.965330713 +0000 UTC m=+102.874497228 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:37 crc kubenswrapper[4761]: E0307 07:50:37.965359 4761 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:37 crc kubenswrapper[4761]: E0307 07:50:37.965467 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:45.965440636 +0000 UTC m=+102.874607151 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.039414 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.039462 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.039481 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.039504 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.039521 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.065914 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.065959 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:38 crc kubenswrapper[4761]: E0307 07:50:38.066104 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:38 crc kubenswrapper[4761]: E0307 07:50:38.066123 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:38 crc kubenswrapper[4761]: E0307 07:50:38.066135 4761 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:38 crc kubenswrapper[4761]: E0307 07:50:38.066135 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 
07:50:38 crc kubenswrapper[4761]: E0307 07:50:38.066179 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:38 crc kubenswrapper[4761]: E0307 07:50:38.066190 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:46.066174369 +0000 UTC m=+102.975340854 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:38 crc kubenswrapper[4761]: E0307 07:50:38.066200 4761 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:38 crc kubenswrapper[4761]: E0307 07:50:38.066275 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:46.066250841 +0000 UTC m=+102.975417356 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.142905 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.142983 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.143009 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.143044 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.143070 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.245537 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.245588 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.245597 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.245611 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.245620 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.348479 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.348545 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.348568 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.348598 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.348620 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.450739 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.450769 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.450779 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.450798 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.450808 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.553265 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.553317 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.553333 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.553355 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.553373 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.655815 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.655862 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.655876 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.655892 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.655905 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.759407 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.759471 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.759491 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.759516 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.759537 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.862164 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.862221 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.862232 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.862266 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.862279 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.965251 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.965309 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.965330 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.965359 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.965381 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.068110 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.068159 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.068175 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.068198 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.068215 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.170826 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.170887 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.170904 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.170927 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.170944 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.274271 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.274387 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.274409 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.274441 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.274458 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.378147 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.378356 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.378391 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.378423 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.378445 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.481676 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.481742 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.481755 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.481775 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.481790 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.583927 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.583991 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.584009 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.584033 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.584051 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.686839 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.686910 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.686931 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.686954 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.686972 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.705348 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.705417 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.705444 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:39 crc kubenswrapper[4761]: E0307 07:50:39.705514 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:39 crc kubenswrapper[4761]: E0307 07:50:39.705639 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:39 crc kubenswrapper[4761]: E0307 07:50:39.705862 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.789773 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.789821 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.789832 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.789853 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.789866 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.893122 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.893165 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.893174 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.893189 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.893200 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.995949 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.996019 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.996039 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.996063 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.996081 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.098390 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.098437 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.098451 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.098467 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.098479 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:40Z","lastTransitionTime":"2026-03-07T07:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.201080 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.201126 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.201136 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.201152 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.201162 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:40Z","lastTransitionTime":"2026-03-07T07:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.303935 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.303976 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.303988 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.304003 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.304017 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:40Z","lastTransitionTime":"2026-03-07T07:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.406858 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.406920 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.406942 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.406965 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.406982 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:40Z","lastTransitionTime":"2026-03-07T07:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.510017 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.510064 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.510076 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.510096 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.510111 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:40Z","lastTransitionTime":"2026-03-07T07:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.613009 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.613094 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.613117 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.613148 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.613170 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:40Z","lastTransitionTime":"2026-03-07T07:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.715794 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.715870 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.715893 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.715921 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.716019 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:40Z","lastTransitionTime":"2026-03-07T07:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.819218 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.819278 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.819295 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.819322 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.819339 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:40Z","lastTransitionTime":"2026-03-07T07:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.922157 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.922205 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.922217 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.922234 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.922246 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:40Z","lastTransitionTime":"2026-03-07T07:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.025204 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.025261 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.025298 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.025332 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.025356 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.128224 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.128310 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.128336 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.128365 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.128383 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.236073 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.236131 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.236149 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.236172 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.236190 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.338653 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.338707 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.338751 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.338777 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.338797 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.426247 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.426302 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.426321 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.426345 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.426363 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.442019 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.447473 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.447519 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.447538 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.447562 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.447580 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.458764 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.463578 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.463692 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.463750 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.463780 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.463801 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.478975 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.483702 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.483785 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.483805 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.483833 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.483850 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.498249 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.503158 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.503275 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.503342 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.503375 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.503396 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.518440 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.518665 4761 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.520955 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.521008 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.521026 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.521049 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.521065 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.624069 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.624114 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.624131 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.624153 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.624169 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.705379 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.705528 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.705563 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.705675 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.705949 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.706106 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.708076 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:41 crc kubenswrapper[4761]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:41 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:41 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:41 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:41 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:41 crc kubenswrapper[4761]: fi Mar 07 07:50:41 crc kubenswrapper[4761]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 07 07:50:41 crc kubenswrapper[4761]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 07 07:50:41 crc kubenswrapper[4761]: ho_enable="--enable-hybrid-overlay" Mar 07 07:50:41 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 07 07:50:41 crc kubenswrapper[4761]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 07 07:50:41 crc kubenswrapper[4761]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 07 07:50:41 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:50:41 crc kubenswrapper[4761]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 07 07:50:41 crc kubenswrapper[4761]: --webhook-host=127.0.0.1 \ Mar 07 07:50:41 crc kubenswrapper[4761]: --webhook-port=9743 \ Mar 07 07:50:41 crc kubenswrapper[4761]: ${ho_enable} \ Mar 07 07:50:41 crc kubenswrapper[4761]: --enable-interconnect \ Mar 07 07:50:41 crc kubenswrapper[4761]: 
--disable-approver \ Mar 07 07:50:41 crc kubenswrapper[4761]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 07 07:50:41 crc kubenswrapper[4761]: --wait-for-kubernetes-api=200s \ Mar 07 07:50:41 crc kubenswrapper[4761]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 07 07:50:41 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:50:41 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:n
il,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:41 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.712033 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:41 crc kubenswrapper[4761]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:41 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:41 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:41 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:41 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:41 crc kubenswrapper[4761]: fi Mar 07 07:50:41 crc kubenswrapper[4761]: Mar 07 07:50:41 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 07 07:50:41 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:50:41 crc kubenswrapper[4761]: --disable-webhook \ Mar 07 07:50:41 crc kubenswrapper[4761]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 07 07:50:41 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:50:41 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:41 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.713346 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.729217 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.729278 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.729300 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.729328 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.729348 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.832103 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.832155 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.832172 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.832195 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.832214 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.935836 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.935901 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.935924 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.935950 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.935971 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.039332 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.039394 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.039411 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.039920 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.039973 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.142652 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.143397 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.143445 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.143511 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.143530 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.247022 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.247243 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.247276 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.247308 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.247330 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.349836 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.349874 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.349889 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.349906 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.349917 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.452708 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.452789 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.452818 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.452842 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.452857 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.554959 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.555035 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.555055 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.555080 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.555137 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.658834 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.658910 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.658933 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.658962 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.658985 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.762215 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.762275 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.762292 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.762314 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.762331 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.865672 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.865764 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.865783 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.865806 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.865823 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.969589 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.969647 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.969664 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.969688 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.969705 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.063595 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bfzp8"] Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.064417 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bfzp8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.070202 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.070981 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.072498 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.074321 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.074530 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.074703 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.074925 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.075076 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:43Z","lastTransitionTime":"2026-03-07T07:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.100044 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.116182 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.127926 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.145564 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.163831 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.180805 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.180869 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.180896 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.180926 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.180946 4761 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:43Z","lastTransitionTime":"2026-03-07T07:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.181527 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.197425 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.213768 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.213914 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t7fk\" (UniqueName: \"kubernetes.io/projected/b293cb75-0655-49e5-811c-14da8b769d26-kube-api-access-9t7fk\") pod \"node-resolver-bfzp8\" (UID: \"b293cb75-0655-49e5-811c-14da8b769d26\") " pod="openshift-dns/node-resolver-bfzp8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.214014 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b293cb75-0655-49e5-811c-14da8b769d26-hosts-file\") pod \"node-resolver-bfzp8\" (UID: \"b293cb75-0655-49e5-811c-14da8b769d26\") " pod="openshift-dns/node-resolver-bfzp8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.229669 4761 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.284167 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.284225 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.284243 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.284267 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.284286 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:43Z","lastTransitionTime":"2026-03-07T07:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.314434 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b293cb75-0655-49e5-811c-14da8b769d26-hosts-file\") pod \"node-resolver-bfzp8\" (UID: \"b293cb75-0655-49e5-811c-14da8b769d26\") " pod="openshift-dns/node-resolver-bfzp8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.314497 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t7fk\" (UniqueName: \"kubernetes.io/projected/b293cb75-0655-49e5-811c-14da8b769d26-kube-api-access-9t7fk\") pod \"node-resolver-bfzp8\" (UID: \"b293cb75-0655-49e5-811c-14da8b769d26\") " pod="openshift-dns/node-resolver-bfzp8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.314702 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b293cb75-0655-49e5-811c-14da8b769d26-hosts-file\") pod \"node-resolver-bfzp8\" (UID: \"b293cb75-0655-49e5-811c-14da8b769d26\") " pod="openshift-dns/node-resolver-bfzp8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.353305 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t7fk\" (UniqueName: \"kubernetes.io/projected/b293cb75-0655-49e5-811c-14da8b769d26-kube-api-access-9t7fk\") pod \"node-resolver-bfzp8\" (UID: \"b293cb75-0655-49e5-811c-14da8b769d26\") " pod="openshift-dns/node-resolver-bfzp8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.387032 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.387109 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.387132 4761 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID"
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.387163 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.387186 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:43Z","lastTransitionTime":"2026-03-07T07:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.392444 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bfzp8"
Mar 07 07:50:43 crc kubenswrapper[4761]: W0307 07:50:43.411991 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb293cb75_0655_49e5_811c_14da8b769d26.slice/crio-742823d0de8e1f30ae5013f916a7244db7f034f43cc7b18a33bf20c305f82790 WatchSource:0}: Error finding container 742823d0de8e1f30ae5013f916a7244db7f034f43cc7b18a33bf20c305f82790: Status 404 returned error can't find the container with id 742823d0de8e1f30ae5013f916a7244db7f034f43cc7b18a33bf20c305f82790
Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.415084 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 07 07:50:43 crc kubenswrapper[4761]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash
Mar 07 07:50:43 crc kubenswrapper[4761]: set -uo pipefail
Mar 07 07:50:43 crc kubenswrapper[4761]: 
Mar 07 07:50:43 crc kubenswrapper[4761]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM
Mar 07 07:50:43 crc kubenswrapper[4761]: 
Mar 07 07:50:43 crc kubenswrapper[4761]: OPENSHIFT_MARKER="openshift-generated-node-resolver"
Mar 07 07:50:43 crc kubenswrapper[4761]: HOSTS_FILE="/etc/hosts"
Mar 07 07:50:43 crc kubenswrapper[4761]: TEMP_FILE="/etc/hosts.tmp"
Mar 07 07:50:43 crc kubenswrapper[4761]: 
Mar 07 07:50:43 crc kubenswrapper[4761]: IFS=', ' read -r -a services <<< "${SERVICES}"
Mar 07 07:50:43 crc kubenswrapper[4761]: 
Mar 07 07:50:43 crc kubenswrapper[4761]: # Make a temporary file with the old hosts file's attributes.
Mar 07 07:50:43 crc kubenswrapper[4761]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then
Mar 07 07:50:43 crc kubenswrapper[4761]: echo "Failed to preserve hosts file. Exiting."
Mar 07 07:50:43 crc kubenswrapper[4761]: exit 1
Mar 07 07:50:43 crc kubenswrapper[4761]: fi
Mar 07 07:50:43 crc kubenswrapper[4761]: 
Mar 07 07:50:43 crc kubenswrapper[4761]: while true; do
Mar 07 07:50:43 crc kubenswrapper[4761]: declare -A svc_ips
Mar 07 07:50:43 crc kubenswrapper[4761]: for svc in "${services[@]}"; do
Mar 07 07:50:43 crc kubenswrapper[4761]: # Fetch service IP from cluster dns if present. We make several tries
Mar 07 07:50:43 crc kubenswrapper[4761]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones
Mar 07 07:50:43 crc kubenswrapper[4761]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not
Mar 07 07:50:43 crc kubenswrapper[4761]: # support UDP loadbalancers and require reaching DNS through TCP.
Mar 07 07:50:43 crc kubenswrapper[4761]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Mar 07 07:50:43 crc kubenswrapper[4761]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Mar 07 07:50:43 crc kubenswrapper[4761]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Mar 07 07:50:43 crc kubenswrapper[4761]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"')
Mar 07 07:50:43 crc kubenswrapper[4761]: for i in ${!cmds[*]}
Mar 07 07:50:43 crc kubenswrapper[4761]: do
Mar 07 07:50:43 crc kubenswrapper[4761]: ips=($(eval "${cmds[i]}"))
Mar 07 07:50:43 crc kubenswrapper[4761]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then
Mar 07 07:50:43 crc kubenswrapper[4761]: svc_ips["${svc}"]="${ips[@]}"
Mar 07 07:50:43 crc kubenswrapper[4761]: break
Mar 07 07:50:43 crc kubenswrapper[4761]: fi
Mar 07 07:50:43 crc kubenswrapper[4761]: done
Mar 07 07:50:43 crc kubenswrapper[4761]: done
Mar 07 07:50:43 crc kubenswrapper[4761]: 
Mar 07 07:50:43 crc kubenswrapper[4761]: # Update /etc/hosts only if we get valid service IPs
Mar 07 07:50:43 crc kubenswrapper[4761]: # We will not update /etc/hosts when there is coredns service outage or api unavailability
Mar 07 07:50:43 crc kubenswrapper[4761]: # Stale entries could exist in /etc/hosts if the service is deleted
Mar 07 07:50:43 crc kubenswrapper[4761]: if [[ -n "${svc_ips[*]-}" ]]; then
Mar 07 07:50:43 crc kubenswrapper[4761]: # Build a new hosts file from /etc/hosts with our custom entries filtered out
Mar 07 07:50:43 crc kubenswrapper[4761]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then
Mar 07 07:50:43 crc kubenswrapper[4761]: # Only continue rebuilding the hosts entries if its original content is preserved
Mar 07 07:50:43 crc kubenswrapper[4761]: sleep 60 & wait
Mar 07 07:50:43 crc kubenswrapper[4761]: continue
Mar 07 07:50:43 crc kubenswrapper[4761]: fi
Mar 07 07:50:43 crc kubenswrapper[4761]: 
Mar 07 07:50:43 crc kubenswrapper[4761]: # Append resolver entries for services
Mar 07 07:50:43 crc kubenswrapper[4761]: rc=0
Mar 07 07:50:43 crc kubenswrapper[4761]: for svc in "${!svc_ips[@]}"; do
Mar 07 07:50:43 crc kubenswrapper[4761]: for ip in ${svc_ips[${svc}]}; do
Mar 07 07:50:43 crc kubenswrapper[4761]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$?
Mar 07 07:50:43 crc kubenswrapper[4761]: done
Mar 07 07:50:43 crc kubenswrapper[4761]: done
Mar 07 07:50:43 crc kubenswrapper[4761]: if [[ $rc -ne 0 ]]; then
Mar 07 07:50:43 crc kubenswrapper[4761]: sleep 60 & wait
Mar 07 07:50:43 crc kubenswrapper[4761]: continue
Mar 07 07:50:43 crc kubenswrapper[4761]: fi
Mar 07 07:50:43 crc kubenswrapper[4761]: 
Mar 07 07:50:43 crc kubenswrapper[4761]: 
Mar 07 07:50:43 crc kubenswrapper[4761]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior
Mar 07 07:50:43 crc kubenswrapper[4761]: # Replace /etc/hosts with our modified version if needed
Mar 07 07:50:43 crc kubenswrapper[4761]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}"
Mar 07 07:50:43 crc kubenswrapper[4761]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn
Mar 07 07:50:43 crc kubenswrapper[4761]: fi
Mar 07 07:50:43 crc kubenswrapper[4761]: sleep 60 & wait
Mar 07 07:50:43 crc kubenswrapper[4761]: unset svc_ips
Mar 07 07:50:43 crc kubenswrapper[4761]: done
Mar 07 07:50:43 crc kubenswrapper[4761]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t7fk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bfzp8_openshift-dns(b293cb75-0655-49e5-811c-14da8b769d26): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:43 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.416355 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bfzp8" 
podUID="b293cb75-0655-49e5-811c-14da8b769d26"
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.441345 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-dvcw9"]
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.441936 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-d7fhg"]
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.442295 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-p8mn8"]
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.443073 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9"
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.443486 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-p8mn8"
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.444196 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-d7fhg"
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.446275 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.447083 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.447540 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.448451 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.448625 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.448860 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.448990 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.449021 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.449049 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.449125 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.449340 4761 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.449614 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.469507 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.485372 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.489765 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.489808 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.489820 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.489837 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.489849 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:43Z","lastTransitionTime":"2026-03-07T07:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.497662 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.514102 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.516543 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-etc-kubernetes\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.516613 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-cnibin\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.516670 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-hostroot\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.516783 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-multus-socket-dir-parent\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.516841 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/66842cd2-650d-4f30-b620-d0b0e40d8f46-cni-binary-copy\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.516891 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-run-k8s-cni-cncf-io\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.516942 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4f2ca598-c5ae-4f45-bb7a-812b75562203-rootfs\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.516980 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j77hs\" (UniqueName: \"kubernetes.io/projected/66842cd2-650d-4f30-b620-d0b0e40d8f46-kube-api-access-j77hs\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517016 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4f2ca598-c5ae-4f45-bb7a-812b75562203-mcd-auth-proxy-config\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517055 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-run-netns\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517086 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e012dce7-a788-4dab-b758-5ace07b2c150-multus-daemon-config\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517118 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rq94\" (UniqueName: \"kubernetes.io/projected/4f2ca598-c5ae-4f45-bb7a-812b75562203-kube-api-access-9rq94\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517152 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-cnibin\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 
07:50:43.517187 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-multus-cni-dir\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517222 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-var-lib-kubelet\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517254 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-system-cni-dir\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517284 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-multus-conf-dir\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517332 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e012dce7-a788-4dab-b758-5ace07b2c150-cni-binary-copy\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517366 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517400 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-system-cni-dir\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517431 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4f2ca598-c5ae-4f45-bb7a-812b75562203-proxy-tls\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517476 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-var-lib-cni-bin\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517513 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-run-multus-certs\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc 
kubenswrapper[4761]: I0307 07:50:43.517546 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j7cv\" (UniqueName: \"kubernetes.io/projected/e012dce7-a788-4dab-b758-5ace07b2c150-kube-api-access-8j7cv\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517578 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-os-release\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517613 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/66842cd2-650d-4f30-b620-d0b0e40d8f46-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517647 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-os-release\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517681 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-var-lib-cni-multus\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " 
pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.531394 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.546269 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.561655 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.577773 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.592167 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.592239 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.592265 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.592294 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.592312 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:43Z","lastTransitionTime":"2026-03-07T07:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.605517 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.618586 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4f2ca598-c5ae-4f45-bb7a-812b75562203-rootfs\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.618646 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j77hs\" (UniqueName: \"kubernetes.io/projected/66842cd2-650d-4f30-b620-d0b0e40d8f46-kube-api-access-j77hs\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.618683 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4f2ca598-c5ae-4f45-bb7a-812b75562203-mcd-auth-proxy-config\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.618753 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-cnibin\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.618780 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4f2ca598-c5ae-4f45-bb7a-812b75562203-rootfs\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.618789 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-run-netns\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.618836 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-run-netns\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.618903 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-cnibin\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.618878 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619251 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e012dce7-a788-4dab-b758-5ace07b2c150-multus-daemon-config\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619332 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rq94\" (UniqueName: \"kubernetes.io/projected/4f2ca598-c5ae-4f45-bb7a-812b75562203-kube-api-access-9rq94\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619358 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-multus-cni-dir\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619583 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-var-lib-kubelet\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619606 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-multus-cni-dir\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 
07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619616 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-system-cni-dir\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619659 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-system-cni-dir\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619677 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-multus-conf-dir\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619695 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-var-lib-kubelet\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619754 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-multus-conf-dir\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619762 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/e012dce7-a788-4dab-b758-5ace07b2c150-cni-binary-copy\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619797 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619849 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-var-lib-cni-bin\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619883 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-run-multus-certs\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619915 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j7cv\" (UniqueName: \"kubernetes.io/projected/e012dce7-a788-4dab-b758-5ace07b2c150-kube-api-access-8j7cv\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619949 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-system-cni-dir\") pod 
\"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619981 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4f2ca598-c5ae-4f45-bb7a-812b75562203-proxy-tls\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620017 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-os-release\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620050 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-var-lib-cni-multus\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620093 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-os-release\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620140 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/66842cd2-650d-4f30-b620-d0b0e40d8f46-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p8mn8\" 
(UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620209 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-etc-kubernetes\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620252 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-cnibin\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620263 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4f2ca598-c5ae-4f45-bb7a-812b75562203-mcd-auth-proxy-config\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620342 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-multus-socket-dir-parent\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620385 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-os-release\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: 
\"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620391 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-etc-kubernetes\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620425 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-var-lib-cni-multus\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620399 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-hostroot\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620442 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-hostroot\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620458 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-cnibin\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620490 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-run-multus-certs\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620504 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/66842cd2-650d-4f30-b620-d0b0e40d8f46-cni-binary-copy\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620524 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-multus-socket-dir-parent\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620560 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-run-k8s-cni-cncf-io\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620766 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-run-k8s-cni-cncf-io\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620766 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-system-cni-dir\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620865 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-var-lib-cni-bin\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620579 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-os-release\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.621130 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e012dce7-a788-4dab-b758-5ace07b2c150-cni-binary-copy\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.621243 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.621864 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/66842cd2-650d-4f30-b620-d0b0e40d8f46-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.622138 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e012dce7-a788-4dab-b758-5ace07b2c150-multus-daemon-config\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.622848 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/66842cd2-650d-4f30-b620-d0b0e40d8f46-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.635137 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.635460 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4f2ca598-c5ae-4f45-bb7a-812b75562203-proxy-tls\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 
07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.640317 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rq94\" (UniqueName: \"kubernetes.io/projected/4f2ca598-c5ae-4f45-bb7a-812b75562203-kube-api-access-9rq94\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.643426 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j7cv\" (UniqueName: \"kubernetes.io/projected/e012dce7-a788-4dab-b758-5ace07b2c150-kube-api-access-8j7cv\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.646431 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.649340 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j77hs\" (UniqueName: \"kubernetes.io/projected/66842cd2-650d-4f30-b620-d0b0e40d8f46-kube-api-access-j77hs\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.667498 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.675647 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.684015 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.693941 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, 
/tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.694664 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.694808 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.694848 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.694882 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.694905 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:43Z","lastTransitionTime":"2026-03-07T07:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.704353 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.704613 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.704662 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.704781 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.704916 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.705305 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.705322 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.706433 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:43 crc kubenswrapper[4761]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 07 07:50:43 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:43 crc kubenswrapper[4761]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 07 07:50:43 crc kubenswrapper[4761]: source /etc/kubernetes/apiserver-url.env Mar 07 07:50:43 crc kubenswrapper[4761]: else Mar 07 07:50:43 crc kubenswrapper[4761]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 07 07:50:43 crc kubenswrapper[4761]: exit 1 Mar 07 07:50:43 crc kubenswrapper[4761]: fi Mar 07 07:50:43 crc kubenswrapper[4761]: exec /usr/bin/cluster-network-operator start 
--listen=0.0.0.0:9104 Mar 07 07:50:43 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},E
nvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectF
ieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:43 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.707772 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.711117 4761 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.724860 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.734492 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.743780 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.753236 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.763059 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.767325 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.775976 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.778639 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.785827 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.787589 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rq94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.797532 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rq94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.797781 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.797806 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.797870 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:43 crc 
kubenswrapper[4761]: I0307 07:50:43.797888 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.797900 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:43Z","lastTransitionTime":"2026-03-07T07:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: W0307 07:50:43.798075 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66842cd2_650d_4f30_b620_d0b0e40d8f46.slice/crio-380969f4ca8a6785c17817e393da2510f8871158a23bd6b2b10b60bff430d40f WatchSource:0}: Error finding container 380969f4ca8a6785c17817e393da2510f8871158a23bd6b2b10b60bff430d40f: Status 404 returned error can't find the container with id 380969f4ca8a6785c17817e393da2510f8871158a23bd6b2b10b60bff430d40f Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.799542 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.799914 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" 
pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.804115 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j77hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-p8mn8_openshift-multus(66842cd2-650d-4f30-b620-d0b0e40d8f46): 
CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:43 crc kubenswrapper[4761]: W0307 07:50:43.804211 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode012dce7_a788_4dab_b758_5ace07b2c150.slice/crio-00c033f43eee072493749db1ad7381e99d16ec682a37608ac8c488ebb68e084a WatchSource:0}: Error finding container 00c033f43eee072493749db1ad7381e99d16ec682a37608ac8c488ebb68e084a: Status 404 returned error can't find the container with id 00c033f43eee072493749db1ad7381e99d16ec682a37608ac8c488ebb68e084a Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.805230 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" podUID="66842cd2-650d-4f30-b620-d0b0e40d8f46" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.808025 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9zpnq"] Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.808925 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.812582 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.813022 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.813234 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.813440 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.813647 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.814868 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.815036 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.815678 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.817183 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:43 crc kubenswrapper[4761]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 07 07:50:43 crc kubenswrapper[4761]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 07 07:50:43 crc kubenswrapper[4761]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8j7cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-d7fhg_openshift-multus(e012dce7-a788-4dab-b758-5ace07b2c150): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:43 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.818396 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-d7fhg" podUID="e012dce7-a788-4dab-b758-5ace07b2c150" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.826296 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.843206 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.850080 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.859620 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.868421 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.879013 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, 
/tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.889288 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.900379 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.901134 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.901179 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.901194 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.901214 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.901226 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:43Z","lastTransitionTime":"2026-03-07T07:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.912038 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.922011 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923474 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-slash\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923669 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-ovn-kubernetes\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923705 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-env-overrides\") pod 
\"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923753 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-node-log\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923800 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-bin\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923822 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-netd\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923868 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-netns\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923888 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-etc-openvswitch\") pod 
\"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923912 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-systemd-units\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923934 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-config\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924007 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-systemd\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924035 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-var-lib-openvswitch\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924102 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-kubelet\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924158 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-script-lib\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924360 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19ab486f-60a2-4522-a589-79b4c4375e53-ovn-node-metrics-cert\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924488 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5l7k\" (UniqueName: \"kubernetes.io/projected/19ab486f-60a2-4522-a589-79b4c4375e53-kube-api-access-n5l7k\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924706 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-log-socket\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924785 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-openvswitch\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924847 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-ovn\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924922 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.949003 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.979429 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.991837 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.004280 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.004445 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.004487 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.004506 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.004532 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.004555 4761 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.018504 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026010 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-openvswitch\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026059 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-ovn\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026090 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026131 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-slash\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026160 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-ovn-kubernetes\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026189 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-env-overrides\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026218 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-node-log\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026246 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-bin\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026273 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-netd\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc 
kubenswrapper[4761]: I0307 07:50:44.026312 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-netns\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026356 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-etc-openvswitch\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026385 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-systemd-units\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026427 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-config\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026455 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-systemd\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026510 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-var-lib-openvswitch\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026553 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-kubelet\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026583 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-script-lib\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026610 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19ab486f-60a2-4522-a589-79b4c4375e53-ovn-node-metrics-cert\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026638 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5l7k\" (UniqueName: \"kubernetes.io/projected/19ab486f-60a2-4522-a589-79b4c4375e53-kube-api-access-n5l7k\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026679 4761 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-log-socket\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026781 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-log-socket\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026837 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-openvswitch\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026876 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-ovn\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026916 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026956 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-slash\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026994 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-ovn-kubernetes\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.027678 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-netd\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.027749 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-kubelet\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.027768 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-node-log\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.027760 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-env-overrides\") pod \"ovnkube-node-9zpnq\" (UID: 
\"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.027987 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-var-lib-openvswitch\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.027994 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-systemd-units\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.028030 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-bin\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.028061 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-netns\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.028095 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-etc-openvswitch\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc 
kubenswrapper[4761]: I0307 07:50:44.028129 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-systemd\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.028900 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-config\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.029927 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-script-lib\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.032513 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.033105 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19ab486f-60a2-4522-a589-79b4c4375e53-ovn-node-metrics-cert\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 
07:50:44.043989 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.045172 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5l7k\" (UniqueName: \"kubernetes.io/projected/19ab486f-60a2-4522-a589-79b4c4375e53-kube-api-access-n5l7k\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.061573 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.082973 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.098987 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.107764 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.107824 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc 
kubenswrapper[4761]: I0307 07:50:44.107848 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.107880 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.107903 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.113380 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.128568 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.129164 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: W0307 07:50:44.150496 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19ab486f_60a2_4522_a589_79b4c4375e53.slice/crio-75ce9a667bf5bdb687aaa63e45963644ed7766516d86b9aab3ce1f1bcd7454dd WatchSource:0}: Error finding container 75ce9a667bf5bdb687aaa63e45963644ed7766516d86b9aab3ce1f1bcd7454dd: Status 404 returned error can't find the container with id 75ce9a667bf5bdb687aaa63e45963644ed7766516d86b9aab3ce1f1bcd7454dd Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.153584 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:44 crc kubenswrapper[4761]: init container 
&Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 07 07:50:44 crc kubenswrapper[4761]: apiVersion: v1 Mar 07 07:50:44 crc kubenswrapper[4761]: clusters: Mar 07 07:50:44 crc kubenswrapper[4761]: - cluster: Mar 07 07:50:44 crc kubenswrapper[4761]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 07 07:50:44 crc kubenswrapper[4761]: server: https://api-int.crc.testing:6443 Mar 07 07:50:44 crc kubenswrapper[4761]: name: default-cluster Mar 07 07:50:44 crc kubenswrapper[4761]: contexts: Mar 07 07:50:44 crc kubenswrapper[4761]: - context: Mar 07 07:50:44 crc kubenswrapper[4761]: cluster: default-cluster Mar 07 07:50:44 crc kubenswrapper[4761]: namespace: default Mar 07 07:50:44 crc kubenswrapper[4761]: user: default-auth Mar 07 07:50:44 crc kubenswrapper[4761]: name: default-context Mar 07 07:50:44 crc kubenswrapper[4761]: current-context: default-context Mar 07 07:50:44 crc kubenswrapper[4761]: kind: Config Mar 07 07:50:44 crc kubenswrapper[4761]: preferences: {} Mar 07 07:50:44 crc kubenswrapper[4761]: users: Mar 07 07:50:44 crc kubenswrapper[4761]: - name: default-auth Mar 07 07:50:44 crc kubenswrapper[4761]: user: Mar 07 07:50:44 crc kubenswrapper[4761]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 07 07:50:44 crc kubenswrapper[4761]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 07 07:50:44 crc kubenswrapper[4761]: EOF Mar 07 07:50:44 crc kubenswrapper[4761]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5l7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-9zpnq_openshift-ovn-kubernetes(19ab486f-60a2-4522-a589-79b4c4375e53): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:44 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.154687 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.216775 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.216812 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.216822 4761 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.216835 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.216846 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.303815 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerStarted","Data":"75ce9a667bf5bdb687aaa63e45963644ed7766516d86b9aab3ce1f1bcd7454dd"} Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.307344 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:44 crc kubenswrapper[4761]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 07 07:50:44 crc kubenswrapper[4761]: apiVersion: v1 Mar 07 07:50:44 crc kubenswrapper[4761]: clusters: Mar 07 07:50:44 crc kubenswrapper[4761]: - cluster: Mar 07 07:50:44 crc kubenswrapper[4761]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 07 07:50:44 crc kubenswrapper[4761]: server: https://api-int.crc.testing:6443 Mar 07 07:50:44 crc kubenswrapper[4761]: name: default-cluster Mar 07 07:50:44 crc kubenswrapper[4761]: contexts: Mar 07 07:50:44 crc kubenswrapper[4761]: - context: Mar 07 07:50:44 crc kubenswrapper[4761]: cluster: 
default-cluster Mar 07 07:50:44 crc kubenswrapper[4761]: namespace: default Mar 07 07:50:44 crc kubenswrapper[4761]: user: default-auth Mar 07 07:50:44 crc kubenswrapper[4761]: name: default-context Mar 07 07:50:44 crc kubenswrapper[4761]: current-context: default-context Mar 07 07:50:44 crc kubenswrapper[4761]: kind: Config Mar 07 07:50:44 crc kubenswrapper[4761]: preferences: {} Mar 07 07:50:44 crc kubenswrapper[4761]: users: Mar 07 07:50:44 crc kubenswrapper[4761]: - name: default-auth Mar 07 07:50:44 crc kubenswrapper[4761]: user: Mar 07 07:50:44 crc kubenswrapper[4761]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 07 07:50:44 crc kubenswrapper[4761]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 07 07:50:44 crc kubenswrapper[4761]: EOF Mar 07 07:50:44 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5l7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-9zpnq_openshift-ovn-kubernetes(19ab486f-60a2-4522-a589-79b4c4375e53): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:44 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 
07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.307418 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" event={"ID":"66842cd2-650d-4f30-b620-d0b0e40d8f46","Type":"ContainerStarted","Data":"380969f4ca8a6785c17817e393da2510f8871158a23bd6b2b10b60bff430d40f"} Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.308664 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.310314 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j77hs,ReadOnly:true,MountPath:/var/run/secre
ts/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-p8mn8_openshift-multus(66842cd2-650d-4f30-b620-d0b0e40d8f46): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.312957 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" podUID="66842cd2-650d-4f30-b620-d0b0e40d8f46" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.313122 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"d9697fd62b30dedf15c538584a0d24a015153976c9773222aeb3bcb76d25e217"} Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.315195 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start 
--payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rq94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" 
logger="UnhandledError" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.317424 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bfzp8" event={"ID":"b293cb75-0655-49e5-811c-14da8b769d26","Type":"ContainerStarted","Data":"742823d0de8e1f30ae5013f916a7244db7f034f43cc7b18a33bf20c305f82790"} Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.318248 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rq94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.319155 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:44 crc kubenswrapper[4761]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 07 07:50:44 crc kubenswrapper[4761]: set -uo pipefail Mar 07 07:50:44 crc kubenswrapper[4761]: Mar 07 07:50:44 crc kubenswrapper[4761]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 07 07:50:44 crc kubenswrapper[4761]: Mar 07 07:50:44 crc kubenswrapper[4761]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 07 07:50:44 crc kubenswrapper[4761]: HOSTS_FILE="/etc/hosts" Mar 07 07:50:44 crc kubenswrapper[4761]: TEMP_FILE="/etc/hosts.tmp" Mar 07 07:50:44 crc 
kubenswrapper[4761]: Mar 07 07:50:44 crc kubenswrapper[4761]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 07 07:50:44 crc kubenswrapper[4761]: Mar 07 07:50:44 crc kubenswrapper[4761]: # Make a temporary file with the old hosts file's attributes. Mar 07 07:50:44 crc kubenswrapper[4761]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 07 07:50:44 crc kubenswrapper[4761]: echo "Failed to preserve hosts file. Exiting." Mar 07 07:50:44 crc kubenswrapper[4761]: exit 1 Mar 07 07:50:44 crc kubenswrapper[4761]: fi Mar 07 07:50:44 crc kubenswrapper[4761]: Mar 07 07:50:44 crc kubenswrapper[4761]: while true; do Mar 07 07:50:44 crc kubenswrapper[4761]: declare -A svc_ips Mar 07 07:50:44 crc kubenswrapper[4761]: for svc in "${services[@]}"; do Mar 07 07:50:44 crc kubenswrapper[4761]: # Fetch service IP from cluster dns if present. We make several tries Mar 07 07:50:44 crc kubenswrapper[4761]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 07 07:50:44 crc kubenswrapper[4761]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 07 07:50:44 crc kubenswrapper[4761]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 07 07:50:44 crc kubenswrapper[4761]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:50:44 crc kubenswrapper[4761]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:50:44 crc kubenswrapper[4761]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:50:44 crc kubenswrapper[4761]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 07 07:50:44 crc kubenswrapper[4761]: for i in ${!cmds[*]} Mar 07 07:50:44 crc kubenswrapper[4761]: do Mar 07 07:50:44 crc kubenswrapper[4761]: ips=($(eval "${cmds[i]}")) Mar 07 07:50:44 crc kubenswrapper[4761]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 07 07:50:44 crc kubenswrapper[4761]: svc_ips["${svc}"]="${ips[@]}" Mar 07 07:50:44 crc kubenswrapper[4761]: break Mar 07 07:50:44 crc kubenswrapper[4761]: fi Mar 07 07:50:44 crc kubenswrapper[4761]: done Mar 07 07:50:44 crc kubenswrapper[4761]: done Mar 07 07:50:44 crc kubenswrapper[4761]: Mar 07 07:50:44 crc kubenswrapper[4761]: # Update /etc/hosts only if we get valid service IPs Mar 07 07:50:44 crc kubenswrapper[4761]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 07 07:50:44 crc kubenswrapper[4761]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 07 07:50:44 crc kubenswrapper[4761]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 07 07:50:44 crc kubenswrapper[4761]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 07 07:50:44 crc kubenswrapper[4761]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 07 07:50:44 crc kubenswrapper[4761]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 07 07:50:44 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:50:44 crc kubenswrapper[4761]: continue Mar 07 07:50:44 crc kubenswrapper[4761]: fi Mar 07 07:50:44 crc kubenswrapper[4761]: Mar 07 07:50:44 crc kubenswrapper[4761]: # Append resolver entries for services Mar 07 07:50:44 crc kubenswrapper[4761]: rc=0 Mar 07 07:50:44 crc kubenswrapper[4761]: for svc in "${!svc_ips[@]}"; do Mar 07 07:50:44 crc kubenswrapper[4761]: for ip in ${svc_ips[${svc}]}; do Mar 07 07:50:44 crc kubenswrapper[4761]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 07 07:50:44 crc kubenswrapper[4761]: done Mar 07 07:50:44 crc kubenswrapper[4761]: done Mar 07 07:50:44 crc kubenswrapper[4761]: if [[ $rc -ne 0 ]]; then Mar 07 07:50:44 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:50:44 crc kubenswrapper[4761]: continue Mar 07 07:50:44 crc kubenswrapper[4761]: fi Mar 07 07:50:44 crc kubenswrapper[4761]: Mar 07 07:50:44 crc kubenswrapper[4761]: Mar 07 07:50:44 crc kubenswrapper[4761]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 07 07:50:44 crc kubenswrapper[4761]: # Replace /etc/hosts with our modified version if needed Mar 07 07:50:44 crc kubenswrapper[4761]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 07 07:50:44 crc kubenswrapper[4761]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 07 07:50:44 crc kubenswrapper[4761]: fi Mar 07 07:50:44 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:50:44 crc kubenswrapper[4761]: unset svc_ips Mar 07 07:50:44 crc kubenswrapper[4761]: done Mar 07 07:50:44 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t7fk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bfzp8_openshift-dns(b293cb75-0655-49e5-811c-14da8b769d26): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:44 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.319301 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.319362 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.319422 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.319454 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.319480 4761 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.319141 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d7fhg" event={"ID":"e012dce7-a788-4dab-b758-5ace07b2c150","Type":"ContainerStarted","Data":"00c033f43eee072493749db1ad7381e99d16ec682a37608ac8c488ebb68e084a"} Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.319677 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.320366 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bfzp8" podUID="b293cb75-0655-49e5-811c-14da8b769d26" Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.323786 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:44 crc kubenswrapper[4761]: container 
&Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 07 07:50:44 crc kubenswrapper[4761]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 07 07:50:44 crc kubenswrapper[4761]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8j7cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-d7fhg_openshift-multus(e012dce7-a788-4dab-b758-5ace07b2c150): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:44 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.324668 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.324947 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-d7fhg" podUID="e012dce7-a788-4dab-b758-5ace07b2c150" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.339069 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.350370 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.368953 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.385637 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.395963 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.406143 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.417986 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.421280 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.421375 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.421392 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.421410 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.421421 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.437200 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.448420 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.462379 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.477115 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.490675 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.509556 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.524331 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.524398 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.524417 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.524440 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.524458 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.524743 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.539597 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.585966 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.615092 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.628019 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.628086 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.628108 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.628138 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.628585 4761 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.652247 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.700097 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.732797 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.732865 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.732883 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.732910 4761 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.732929 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.740044 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.777808 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.813574 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.835505 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.835537 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.835547 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.835561 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.835572 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.854552 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.914120 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.932451 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.937891 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.937925 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.937935 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.937950 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.937963 4761 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.040643 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.040706 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.040769 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.040804 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.040827 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.143637 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.143699 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.143754 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.143785 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.143808 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.246270 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.246332 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.246355 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.246385 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.246407 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.349455 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.349547 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.349572 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.349601 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.349622 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.451903 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.451956 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.451972 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.451995 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.452013 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.554301 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.554374 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.554397 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.554425 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.554445 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.657060 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.657099 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.657111 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.657127 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.657139 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.705596 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.705667 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.705827 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:45 crc kubenswrapper[4761]: E0307 07:50:45.706017 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:45 crc kubenswrapper[4761]: E0307 07:50:45.706486 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:45 crc kubenswrapper[4761]: E0307 07:50:45.706599 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:45 crc kubenswrapper[4761]: E0307 07:50:45.708007 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,Volum
eDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:45 crc kubenswrapper[4761]: E0307 07:50:45.709092 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.759610 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.759704 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.759771 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.759803 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.759824 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.863098 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.863145 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.863153 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.863168 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.863180 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.966242 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.966283 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.966292 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.966308 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.966318 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.051421 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.051635 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-07 07:51:02.051600691 +0000 UTC m=+118.960767206 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.051755 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.051843 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.051985 4761 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.051993 4761 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.052102 4761 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:02.052072083 +0000 UTC m=+118.961238598 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.052140 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:02.052127064 +0000 UTC m=+118.961293579 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.069860 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.069926 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.069947 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.069974 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.069992 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.153229 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.153335 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.153466 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.153512 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.153559 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.153517 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.153581 4761 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr 
for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.153598 4761 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.153668 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:02.153644668 +0000 UTC m=+119.062811183 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.153697 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:02.153683689 +0000 UTC m=+119.062850204 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.172814 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.172870 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.172889 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.172915 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.172933 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.275786 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.275838 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.275860 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.275886 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.275903 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.378948 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.378999 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.379016 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.379060 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.379082 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.481956 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.482011 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.482028 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.482049 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.482064 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.584770 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.584815 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.584826 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.584846 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.584858 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.688348 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.688414 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.688437 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.688469 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.688492 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.790669 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.790703 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.790732 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.790746 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.790754 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.893137 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.893181 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.893192 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.893208 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.893220 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.995861 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.995913 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.995928 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.995950 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.995966 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.099322 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.099399 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.099425 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.099456 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.099485 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:47Z","lastTransitionTime":"2026-03-07T07:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.202029 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.202375 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.202535 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.202702 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.202925 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:47Z","lastTransitionTime":"2026-03-07T07:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.306703 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.306795 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.306815 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.306842 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.306860 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:47Z","lastTransitionTime":"2026-03-07T07:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.409891 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.410379 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.410564 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.410771 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.410936 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:47Z","lastTransitionTime":"2026-03-07T07:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.514683 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.514774 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.514793 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.514816 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.514834 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:47Z","lastTransitionTime":"2026-03-07T07:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.618167 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.618221 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.618232 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.618249 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.618258 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:47Z","lastTransitionTime":"2026-03-07T07:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.705416 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.705485 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:47 crc kubenswrapper[4761]: E0307 07:50:47.705569 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.705586 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:47 crc kubenswrapper[4761]: E0307 07:50:47.705702 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:47 crc kubenswrapper[4761]: E0307 07:50:47.705843 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.722161 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.722251 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.722282 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.722316 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.722353 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:47Z","lastTransitionTime":"2026-03-07T07:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.826352 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.826417 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.826434 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.826457 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.826474 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:47Z","lastTransitionTime":"2026-03-07T07:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.930507 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.930557 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.930569 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.930589 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.930602 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:47Z","lastTransitionTime":"2026-03-07T07:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.033317 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.033364 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.033375 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.033390 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.033401 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.136256 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.136323 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.136335 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.136670 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.136698 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.239317 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.239373 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.239390 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.239411 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.239428 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.341999 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.342050 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.342067 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.342089 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.342111 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.445183 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.445249 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.445266 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.445291 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.445308 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.548213 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.548267 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.548279 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.548298 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.548310 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.651303 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.651706 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.652002 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.652236 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.652447 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.706263 4761 scope.go:117] "RemoveContainer" containerID="d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.755488 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.755558 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.755583 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.755613 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.755636 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.858773 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.858805 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.858814 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.858830 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.858840 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.961784 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.961817 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.961829 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.961844 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.961853 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.064386 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.064418 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.064426 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.064438 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.064447 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.167073 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.167119 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.167130 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.167150 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.167169 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.271890 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.271941 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.271958 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.271981 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.272000 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.335947 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.337637 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.338135 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.353492 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.364391 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.374396 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.374417 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.374560 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.374572 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.374589 4761 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.374601 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.382417 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.400939 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.416351 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.426523 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.435416 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.449151 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.464187 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.476638 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.476684 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.476695 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.476717 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.476751 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.476919 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.492161 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.510897 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.580049 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.580108 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.580124 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.580147 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.580164 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.646940 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-tbbjn"] Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.649205 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.651400 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.651548 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.652328 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.655415 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.677778 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\
\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}}
,{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.682055 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.682099 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.682114 4761 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.682136 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.682150 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.690143 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.703133 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.705356 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.705384 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:49 crc kubenswrapper[4761]: E0307 07:50:49.705503 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.705537 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:49 crc kubenswrapper[4761]: E0307 07:50:49.705620 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:49 crc kubenswrapper[4761]: E0307 07:50:49.705702 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.725380 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.735380 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.752257 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, 
/tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.763568 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.772520 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.783426 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.784925 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.784992 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.785011 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.785038 4761 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.785059 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.798976 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.800318 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bae31fe3-35c2-49ba-a314-78ade009741c-host\") pod \"node-ca-tbbjn\" (UID: \"bae31fe3-35c2-49ba-a314-78ade009741c\") " pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.800367 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgsp4\" (UniqueName: \"kubernetes.io/projected/bae31fe3-35c2-49ba-a314-78ade009741c-kube-api-access-tgsp4\") pod \"node-ca-tbbjn\" (UID: \"bae31fe3-35c2-49ba-a314-78ade009741c\") " pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.800434 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bae31fe3-35c2-49ba-a314-78ade009741c-serviceca\") pod \"node-ca-tbbjn\" (UID: \"bae31fe3-35c2-49ba-a314-78ade009741c\") " pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.809372 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.818570 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.827299 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.834977 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.888156 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.888196 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.888206 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.888219 4761 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.888229 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.902103 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bae31fe3-35c2-49ba-a314-78ade009741c-host\") pod \"node-ca-tbbjn\" (UID: \"bae31fe3-35c2-49ba-a314-78ade009741c\") " pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.902165 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgsp4\" (UniqueName: \"kubernetes.io/projected/bae31fe3-35c2-49ba-a314-78ade009741c-kube-api-access-tgsp4\") pod \"node-ca-tbbjn\" (UID: \"bae31fe3-35c2-49ba-a314-78ade009741c\") " pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.902226 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bae31fe3-35c2-49ba-a314-78ade009741c-host\") pod \"node-ca-tbbjn\" (UID: \"bae31fe3-35c2-49ba-a314-78ade009741c\") " pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.902284 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bae31fe3-35c2-49ba-a314-78ade009741c-serviceca\") pod \"node-ca-tbbjn\" (UID: \"bae31fe3-35c2-49ba-a314-78ade009741c\") " 
pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.903277 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bae31fe3-35c2-49ba-a314-78ade009741c-serviceca\") pod \"node-ca-tbbjn\" (UID: \"bae31fe3-35c2-49ba-a314-78ade009741c\") " pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.922277 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgsp4\" (UniqueName: \"kubernetes.io/projected/bae31fe3-35c2-49ba-a314-78ade009741c-kube-api-access-tgsp4\") pod \"node-ca-tbbjn\" (UID: \"bae31fe3-35c2-49ba-a314-78ade009741c\") " pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.968649 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: W0307 07:50:49.981546 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbae31fe3_35c2_49ba_a314_78ade009741c.slice/crio-1af3cbb15f8c68ac1faf09102390d5c59d427700fb79408452bad534fc0bd048 WatchSource:0}: Error finding container 1af3cbb15f8c68ac1faf09102390d5c59d427700fb79408452bad534fc0bd048: Status 404 returned error can't find the container with id 1af3cbb15f8c68ac1faf09102390d5c59d427700fb79408452bad534fc0bd048 Mar 07 07:50:49 crc kubenswrapper[4761]: E0307 07:50:49.984695 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:49 crc kubenswrapper[4761]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 07 07:50:49 crc kubenswrapper[4761]: while [ true ]; Mar 07 07:50:49 crc 
kubenswrapper[4761]: do Mar 07 07:50:49 crc kubenswrapper[4761]: for f in $(ls /tmp/serviceca); do Mar 07 07:50:49 crc kubenswrapper[4761]: echo $f Mar 07 07:50:49 crc kubenswrapper[4761]: ca_file_path="/tmp/serviceca/${f}" Mar 07 07:50:49 crc kubenswrapper[4761]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 07 07:50:49 crc kubenswrapper[4761]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 07 07:50:49 crc kubenswrapper[4761]: if [ -e "${reg_dir_path}" ]; then Mar 07 07:50:49 crc kubenswrapper[4761]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 07 07:50:49 crc kubenswrapper[4761]: else Mar 07 07:50:49 crc kubenswrapper[4761]: mkdir $reg_dir_path Mar 07 07:50:49 crc kubenswrapper[4761]: cp $ca_file_path $reg_dir_path/ca.crt Mar 07 07:50:49 crc kubenswrapper[4761]: fi Mar 07 07:50:49 crc kubenswrapper[4761]: done Mar 07 07:50:49 crc kubenswrapper[4761]: for d in $(ls /etc/docker/certs.d); do Mar 07 07:50:49 crc kubenswrapper[4761]: echo $d Mar 07 07:50:49 crc kubenswrapper[4761]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 07 07:50:49 crc kubenswrapper[4761]: reg_conf_path="/tmp/serviceca/${dp}" Mar 07 07:50:49 crc kubenswrapper[4761]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 07 07:50:49 crc kubenswrapper[4761]: rm -rf /etc/docker/certs.d/$d Mar 07 07:50:49 crc kubenswrapper[4761]: fi Mar 07 07:50:49 crc kubenswrapper[4761]: done Mar 07 07:50:49 crc kubenswrapper[4761]: sleep 60 & wait ${!} Mar 07 07:50:49 crc kubenswrapper[4761]: done Mar 07 07:50:49 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgsp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-tbbjn_openshift-image-registry(bae31fe3-35c2-49ba-a314-78ade009741c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:49 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:49 crc kubenswrapper[4761]: E0307 07:50:49.985919 4761 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-tbbjn" podUID="bae31fe3-35c2-49ba-a314-78ade009741c" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.991406 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.991471 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.991485 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.991502 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.991515 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.094828 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.094890 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.094919 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.094944 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.094961 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:50Z","lastTransitionTime":"2026-03-07T07:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.198324 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.198380 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.198397 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.198421 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.198438 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:50Z","lastTransitionTime":"2026-03-07T07:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.300372 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.300416 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.300430 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.300451 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.300466 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:50Z","lastTransitionTime":"2026-03-07T07:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.341580 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tbbjn" event={"ID":"bae31fe3-35c2-49ba-a314-78ade009741c","Type":"ContainerStarted","Data":"1af3cbb15f8c68ac1faf09102390d5c59d427700fb79408452bad534fc0bd048"} Mar 07 07:50:50 crc kubenswrapper[4761]: E0307 07:50:50.343069 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:50 crc kubenswrapper[4761]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 07 07:50:50 crc kubenswrapper[4761]: while [ true ]; Mar 07 07:50:50 crc kubenswrapper[4761]: do Mar 07 07:50:50 crc kubenswrapper[4761]: for f in $(ls /tmp/serviceca); do Mar 07 07:50:50 crc kubenswrapper[4761]: echo $f Mar 07 07:50:50 crc kubenswrapper[4761]: ca_file_path="/tmp/serviceca/${f}" Mar 07 07:50:50 crc kubenswrapper[4761]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 07 07:50:50 crc kubenswrapper[4761]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 07 07:50:50 crc kubenswrapper[4761]: if [ -e "${reg_dir_path}" ]; then Mar 07 07:50:50 crc kubenswrapper[4761]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 07 07:50:50 crc kubenswrapper[4761]: else Mar 07 07:50:50 crc kubenswrapper[4761]: mkdir $reg_dir_path Mar 07 07:50:50 crc kubenswrapper[4761]: cp $ca_file_path $reg_dir_path/ca.crt Mar 07 07:50:50 crc kubenswrapper[4761]: fi Mar 07 07:50:50 crc kubenswrapper[4761]: done Mar 07 07:50:50 crc kubenswrapper[4761]: for d in $(ls /etc/docker/certs.d); do Mar 07 07:50:50 crc kubenswrapper[4761]: echo $d Mar 07 07:50:50 crc kubenswrapper[4761]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 07 07:50:50 crc kubenswrapper[4761]: reg_conf_path="/tmp/serviceca/${dp}" Mar 07 07:50:50 crc kubenswrapper[4761]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 07 07:50:50 crc kubenswrapper[4761]: rm -rf /etc/docker/certs.d/$d Mar 07 07:50:50 crc kubenswrapper[4761]: fi Mar 07 07:50:50 crc kubenswrapper[4761]: done Mar 07 07:50:50 crc kubenswrapper[4761]: sleep 60 & wait ${!} Mar 07 07:50:50 crc kubenswrapper[4761]: done Mar 07 07:50:50 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgsp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-tbbjn_openshift-image-registry(bae31fe3-35c2-49ba-a314-78ade009741c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:50 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:50 crc kubenswrapper[4761]: E0307 07:50:50.344589 4761 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-tbbjn" podUID="bae31fe3-35c2-49ba-a314-78ade009741c" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.363255 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.378256 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb2
2b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.389184 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.399630 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.403026 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.403117 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.403134 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.403158 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.403177 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:50Z","lastTransitionTime":"2026-03-07T07:50:50Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.411442 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.424583 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, 
/tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.439222 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.450077 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.466498 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.477903 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.495688 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.506084 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.506163 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:50 crc 
kubenswrapper[4761]: I0307 07:50:50.506179 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.506194 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.506205 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:50Z","lastTransitionTime":"2026-03-07T07:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.509640 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.520413 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.535558 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.609279 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.609339 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.609356 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.609381 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.609397 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:50Z","lastTransitionTime":"2026-03-07T07:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.712665 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.712766 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.712788 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.712815 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.712834 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:50Z","lastTransitionTime":"2026-03-07T07:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.816360 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.816410 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.816427 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.816449 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.816466 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:50Z","lastTransitionTime":"2026-03-07T07:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.919439 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.919495 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.919520 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.919569 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.919594 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:50Z","lastTransitionTime":"2026-03-07T07:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.024635 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.024692 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.024708 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.024756 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.024772 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.127448 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.127486 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.127495 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.127509 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.127517 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.230142 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.230197 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.230216 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.230243 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.230265 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.333407 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.333492 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.333541 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.333567 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.333584 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.435871 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.435915 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.435928 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.435944 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.435955 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.538311 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.538367 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.538381 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.538400 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.538414 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.640314 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.640349 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.640360 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.640374 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.640385 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.705476 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.705597 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:51 crc kubenswrapper[4761]: E0307 07:50:51.705873 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.705943 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:51 crc kubenswrapper[4761]: E0307 07:50:51.705996 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:51 crc kubenswrapper[4761]: E0307 07:50:51.706120 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.739165 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.739236 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.739264 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.739297 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.739320 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: E0307 07:50:51.749501 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.753783 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.753848 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.753865 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.753888 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.753906 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: E0307 07:50:51.769292 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.772738 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.772764 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.772774 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.772791 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.772803 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: E0307 07:50:51.780843 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.784102 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.784147 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.784160 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.784179 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.784190 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 07 07:50:51 crc kubenswrapper[4761]: E0307 07:50:51.795138 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.800285 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.800325 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.800334 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.800348 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.800356 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 07 07:50:51 crc kubenswrapper[4761]: E0307 07:50:51.813630 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:51 crc kubenswrapper[4761]: E0307 07:50:51.813808 4761 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.816019 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.816055 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.816068 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.816083 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.816094 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.919470 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.919540 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.919562 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.919678 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.919749 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.022444 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.022498 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.022515 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.022538 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.022555 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.125214 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.125285 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.125319 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.125347 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.125367 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.227824 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.227891 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.227912 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.227941 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.228005 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.330887 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.330956 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.330974 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.330999 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.331015 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.434788 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.434844 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.434861 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.434884 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.434902 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.538062 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.538121 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.538141 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.538166 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.538191 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.642408 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.642461 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.642479 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.642502 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.642521 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: E0307 07:50:52.708019 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:52 crc kubenswrapper[4761]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:52 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:52 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:52 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:52 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:52 crc kubenswrapper[4761]: fi Mar 07 07:50:52 crc kubenswrapper[4761]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 07 07:50:52 crc kubenswrapper[4761]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 07 07:50:52 crc kubenswrapper[4761]: ho_enable="--enable-hybrid-overlay" Mar 07 07:50:52 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 07 07:50:52 crc kubenswrapper[4761]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 07 07:50:52 crc kubenswrapper[4761]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 07 07:50:52 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:50:52 crc kubenswrapper[4761]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 07 07:50:52 crc kubenswrapper[4761]: --webhook-host=127.0.0.1 \ Mar 07 07:50:52 crc kubenswrapper[4761]: --webhook-port=9743 \ Mar 07 07:50:52 crc kubenswrapper[4761]: ${ho_enable} \ Mar 07 07:50:52 crc kubenswrapper[4761]: --enable-interconnect \ Mar 07 07:50:52 crc kubenswrapper[4761]: --disable-approver \ Mar 07 07:50:52 crc kubenswrapper[4761]: 
--extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 07 07:50:52 crc kubenswrapper[4761]: --wait-for-kubernetes-api=200s \ Mar 07 07:50:52 crc kubenswrapper[4761]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 07 07:50:52 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:50:52 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false
,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:52 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:52 crc kubenswrapper[4761]: E0307 07:50:52.711672 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:52 crc kubenswrapper[4761]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:52 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:52 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:52 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:52 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:52 crc kubenswrapper[4761]: fi Mar 07 07:50:52 crc kubenswrapper[4761]: Mar 07 07:50:52 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 07 07:50:52 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:50:52 crc kubenswrapper[4761]: --disable-webhook \ Mar 07 07:50:52 crc kubenswrapper[4761]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 07 07:50:52 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:50:52 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:52 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:52 crc kubenswrapper[4761]: E0307 07:50:52.713415 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.745232 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.745283 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.745296 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.745316 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.745334 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.848037 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.848089 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.848100 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.848120 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.848131 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.951125 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.951163 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.951171 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.951185 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.951196 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.053684 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.053807 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.053833 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.053865 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.053888 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:53Z","lastTransitionTime":"2026-03-07T07:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.156919 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.156958 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.156987 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.157002 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.157011 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:53Z","lastTransitionTime":"2026-03-07T07:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.259574 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.259656 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.259670 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.259686 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.259700 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:53Z","lastTransitionTime":"2026-03-07T07:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.362244 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.362322 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.362348 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.362378 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.362401 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:53Z","lastTransitionTime":"2026-03-07T07:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.465625 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.465688 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.465704 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.465750 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.465767 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:53Z","lastTransitionTime":"2026-03-07T07:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.568774 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.568833 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.568850 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.568876 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.568895 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:53Z","lastTransitionTime":"2026-03-07T07:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.671920 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.671965 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.671980 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.672000 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.672015 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:53Z","lastTransitionTime":"2026-03-07T07:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.705298 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.705319 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.705386 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:53 crc kubenswrapper[4761]: E0307 07:50:53.705543 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:53 crc kubenswrapper[4761]: E0307 07:50:53.705691 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:53 crc kubenswrapper[4761]: E0307 07:50:53.705804 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.717288 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.730159 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, 
/tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.741459 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.751844 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.768901 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.774488 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.774521 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.774532 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.774548 4761 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.774562 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:53Z","lastTransitionTime":"2026-03-07T07:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.777369 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.788060 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.796074 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.803486 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.812152 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.824081 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.842898 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb2
2b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.851897 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.861207 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.025921 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.025970 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.025988 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.026013 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.026030 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.128170 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.128212 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.128222 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.128236 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.128247 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.229884 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.229924 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.229934 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.229948 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.229957 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.331498 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.331540 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.331550 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.331566 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.331575 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.434712 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.434815 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.434839 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.434867 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.434891 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.536960 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.537005 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.537017 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.537034 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.537047 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.639575 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.639615 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.639623 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.639636 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.639645 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:54 crc kubenswrapper[4761]: E0307 07:50:54.707885 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:54 crc kubenswrapper[4761]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 07 07:50:54 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:54 crc kubenswrapper[4761]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 07 07:50:54 crc kubenswrapper[4761]: source /etc/kubernetes/apiserver-url.env Mar 07 07:50:54 crc kubenswrapper[4761]: else Mar 07 07:50:54 crc kubenswrapper[4761]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 07 07:50:54 crc kubenswrapper[4761]: exit 1 Mar 07 07:50:54 crc kubenswrapper[4761]: fi Mar 07 07:50:54 crc kubenswrapper[4761]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 07 07:50:54 crc kubenswrapper[4761]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:54 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:54 crc kubenswrapper[4761]: E0307 07:50:54.709187 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.742576 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 
07:50:54.742631 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.742650 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.742672 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.742690 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.845143 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.845216 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.845248 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.845284 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.845305 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.948341 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.948428 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.948461 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.948493 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.948514 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.051081 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.051143 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.051160 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.051182 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.051200 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.153599 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.153666 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.153687 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.153741 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.153761 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.255576 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.255609 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.255619 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.255632 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.255640 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.357606 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.357649 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.357662 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.357679 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.357691 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.459955 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.460022 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.460045 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.460072 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.460095 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.488216 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62"] Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.489016 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.491204 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.492966 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.505285 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.516418 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.530927 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.539211 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9d2eccd-e600-437b-b36a-a3ed8e383128-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.539415 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9d2eccd-e600-437b-b36a-a3ed8e383128-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.539530 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9d2eccd-e600-437b-b36a-a3ed8e383128-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.539632 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcxbf\" (UniqueName: \"kubernetes.io/projected/c9d2eccd-e600-437b-b36a-a3ed8e383128-kube-api-access-jcxbf\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.539935 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.547745 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.561489 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.562939 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.562989 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.563006 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.563031 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.563050 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.569652 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.583873 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.606792 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.614751 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.622883 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.631750 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, 
/tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.640148 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9d2eccd-e600-437b-b36a-a3ed8e383128-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.640213 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9d2eccd-e600-437b-b36a-a3ed8e383128-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.640234 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcxbf\" (UniqueName: \"kubernetes.io/projected/c9d2eccd-e600-437b-b36a-a3ed8e383128-kube-api-access-jcxbf\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.640287 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9d2eccd-e600-437b-b36a-a3ed8e383128-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.640973 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9d2eccd-e600-437b-b36a-a3ed8e383128-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.641047 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9d2eccd-e600-437b-b36a-a3ed8e383128-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.643046 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.651996 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.653348 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9d2eccd-e600-437b-b36a-a3ed8e383128-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.660453 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcxbf\" (UniqueName: \"kubernetes.io/projected/c9d2eccd-e600-437b-b36a-a3ed8e383128-kube-api-access-jcxbf\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.665766 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.666099 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.666250 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.666264 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.666285 4761 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.666300 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.705396 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:55 crc kubenswrapper[4761]: E0307 07:50:55.705492 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.705531 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:55 crc kubenswrapper[4761]: E0307 07:50:55.705694 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.705764 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:55 crc kubenswrapper[4761]: E0307 07:50:55.706023 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:55 crc kubenswrapper[4761]: E0307 07:50:55.707560 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:55 crc kubenswrapper[4761]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 07 07:50:55 crc kubenswrapper[4761]: set -uo pipefail Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 07 07:50:55 crc kubenswrapper[4761]: HOSTS_FILE="/etc/hosts" Mar 07 07:50:55 crc kubenswrapper[4761]: TEMP_FILE="/etc/hosts.tmp" Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: # Make a temporary file with the old hosts file's attributes. Mar 07 07:50:55 crc kubenswrapper[4761]: if ! 
cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 07 07:50:55 crc kubenswrapper[4761]: echo "Failed to preserve hosts file. Exiting." Mar 07 07:50:55 crc kubenswrapper[4761]: exit 1 Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: while true; do Mar 07 07:50:55 crc kubenswrapper[4761]: declare -A svc_ips Mar 07 07:50:55 crc kubenswrapper[4761]: for svc in "${services[@]}"; do Mar 07 07:50:55 crc kubenswrapper[4761]: # Fetch service IP from cluster dns if present. We make several tries Mar 07 07:50:55 crc kubenswrapper[4761]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 07 07:50:55 crc kubenswrapper[4761]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 07 07:50:55 crc kubenswrapper[4761]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 07 07:50:55 crc kubenswrapper[4761]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:50:55 crc kubenswrapper[4761]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:50:55 crc kubenswrapper[4761]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:50:55 crc kubenswrapper[4761]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 07 07:50:55 crc kubenswrapper[4761]: for i in ${!cmds[*]} Mar 07 07:50:55 crc kubenswrapper[4761]: do Mar 07 07:50:55 crc kubenswrapper[4761]: ips=($(eval "${cmds[i]}")) Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: svc_ips["${svc}"]="${ips[@]}" Mar 07 07:50:55 crc kubenswrapper[4761]: break Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: done Mar 07 07:50:55 crc kubenswrapper[4761]: done Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: # Update /etc/hosts only if we get valid service IPs Mar 07 07:50:55 crc kubenswrapper[4761]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 07 07:50:55 crc kubenswrapper[4761]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 07 07:50:55 crc kubenswrapper[4761]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 07 07:50:55 crc kubenswrapper[4761]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 07 07:50:55 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:50:55 crc kubenswrapper[4761]: continue Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: # Append resolver entries for services Mar 07 07:50:55 crc kubenswrapper[4761]: rc=0 Mar 07 07:50:55 crc kubenswrapper[4761]: for svc in "${!svc_ips[@]}"; do Mar 07 07:50:55 crc kubenswrapper[4761]: for ip in ${svc_ips[${svc}]}; do Mar 07 07:50:55 crc kubenswrapper[4761]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 07 07:50:55 crc kubenswrapper[4761]: done Mar 07 07:50:55 crc kubenswrapper[4761]: done Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ $rc -ne 0 ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:50:55 crc kubenswrapper[4761]: continue Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 07 07:50:55 crc kubenswrapper[4761]: # Replace /etc/hosts with our modified version if needed Mar 07 07:50:55 crc kubenswrapper[4761]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 07 07:50:55 crc kubenswrapper[4761]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:50:55 crc kubenswrapper[4761]: unset svc_ips Mar 07 07:50:55 crc kubenswrapper[4761]: done Mar 07 07:50:55 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t7fk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bfzp8_openshift-dns(b293cb75-0655-49e5-811c-14da8b769d26): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:55 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:55 crc kubenswrapper[4761]: E0307 07:50:55.708745 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bfzp8" podUID="b293cb75-0655-49e5-811c-14da8b769d26" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.769218 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.769292 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc 
kubenswrapper[4761]: I0307 07:50:55.769305 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.769322 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.769333 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.809653 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: W0307 07:50:55.822272 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9d2eccd_e600_437b_b36a_a3ed8e383128.slice/crio-7767a76fb82c5eb7f085750ba99e084ac3accb5e00225dcea2b2d6712c8f4481 WatchSource:0}: Error finding container 7767a76fb82c5eb7f085750ba99e084ac3accb5e00225dcea2b2d6712c8f4481: Status 404 returned error can't find the container with id 7767a76fb82c5eb7f085750ba99e084ac3accb5e00225dcea2b2d6712c8f4481 Mar 07 07:50:55 crc kubenswrapper[4761]: E0307 07:50:55.824759 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:55 crc kubenswrapper[4761]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 07 07:50:55 crc kubenswrapper[4761]: set -euo pipefail Mar 07 
07:50:55 crc kubenswrapper[4761]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 07 07:50:55 crc kubenswrapper[4761]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 07 07:50:55 crc kubenswrapper[4761]: # As the secret mount is optional we must wait for the files to be present. Mar 07 07:50:55 crc kubenswrapper[4761]: # The service is created in monitor.yaml and this is created in sdn.yaml. Mar 07 07:50:55 crc kubenswrapper[4761]: TS=$(date +%s) Mar 07 07:50:55 crc kubenswrapper[4761]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 07 07:50:55 crc kubenswrapper[4761]: HAS_LOGGED_INFO=0 Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: log_missing_certs(){ Mar 07 07:50:55 crc kubenswrapper[4761]: CUR_TS=$(date +%s) Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 07 07:50:55 crc kubenswrapper[4761]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 07 07:50:55 crc kubenswrapper[4761]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 07 07:50:55 crc kubenswrapper[4761]: HAS_LOGGED_INFO=1 Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: } Mar 07 07:50:55 crc kubenswrapper[4761]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Mar 07 07:50:55 crc kubenswrapper[4761]: log_missing_certs Mar 07 07:50:55 crc kubenswrapper[4761]: sleep 5 Mar 07 07:50:55 crc kubenswrapper[4761]: done Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 07 07:50:55 crc kubenswrapper[4761]: exec /usr/bin/kube-rbac-proxy \ Mar 07 07:50:55 crc kubenswrapper[4761]: --logtostderr \ Mar 07 07:50:55 crc kubenswrapper[4761]: --secure-listen-address=:9108 \ Mar 07 07:50:55 crc kubenswrapper[4761]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 07 07:50:55 crc kubenswrapper[4761]: --upstream=http://127.0.0.1:29108/ \ Mar 07 07:50:55 crc kubenswrapper[4761]: --tls-private-key-file=${TLS_PK} \ Mar 07 07:50:55 crc kubenswrapper[4761]: --tls-cert-file=${TLS_CERT} Mar 07 07:50:55 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcxbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-cfb62_openshift-ovn-kubernetes(c9d2eccd-e600-437b-b36a-a3ed8e383128): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:55 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:55 crc kubenswrapper[4761]: E0307 07:50:55.826905 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:55 crc kubenswrapper[4761]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:55 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:55 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: ovn_v4_join_subnet_opt= Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 07 
07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: ovn_v6_join_subnet_opt= Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: ovn_v4_transit_switch_subnet_opt= Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: ovn_v6_transit_switch_subnet_opt= Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: dns_name_resolver_enabled_flag= Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ "false" == "true" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: persistent_ips_enabled_flag= Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ "true" == "true" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: # This is needed so that converting clusters from GA to TP Mar 07 07:50:55 crc kubenswrapper[4761]: # will rollout control plane pods as well Mar 07 07:50:55 crc kubenswrapper[4761]: network_segmentation_enabled_flag= Mar 07 07:50:55 crc kubenswrapper[4761]: multi_network_enabled_flag= Mar 07 07:50:55 crc 
kubenswrapper[4761]: if [[ "true" == "true" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: multi_network_enabled_flag="--enable-multi-network" Mar 07 07:50:55 crc kubenswrapper[4761]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 07 07:50:55 crc kubenswrapper[4761]: exec /usr/bin/ovnkube \ Mar 07 07:50:55 crc kubenswrapper[4761]: --enable-interconnect \ Mar 07 07:50:55 crc kubenswrapper[4761]: --init-cluster-manager "${K8S_NODE}" \ Mar 07 07:50:55 crc kubenswrapper[4761]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 07 07:50:55 crc kubenswrapper[4761]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 07 07:50:55 crc kubenswrapper[4761]: --metrics-bind-address "127.0.0.1:29108" \ Mar 07 07:50:55 crc kubenswrapper[4761]: --metrics-enable-pprof \ Mar 07 07:50:55 crc kubenswrapper[4761]: --metrics-enable-config-duration \ Mar 07 07:50:55 crc kubenswrapper[4761]: ${ovn_v4_join_subnet_opt} \ Mar 07 07:50:55 crc kubenswrapper[4761]: ${ovn_v6_join_subnet_opt} \ Mar 07 07:50:55 crc kubenswrapper[4761]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 07 07:50:55 crc kubenswrapper[4761]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 07 07:50:55 crc kubenswrapper[4761]: ${dns_name_resolver_enabled_flag} \ Mar 07 07:50:55 crc kubenswrapper[4761]: ${persistent_ips_enabled_flag} \ Mar 07 07:50:55 crc kubenswrapper[4761]: ${multi_network_enabled_flag} \ Mar 07 07:50:55 crc kubenswrapper[4761]: ${network_segmentation_enabled_flag} Mar 07 07:50:55 crc kubenswrapper[4761]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcxbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-cfb62_openshift-ovn-kubernetes(c9d2eccd-e600-437b-b36a-a3ed8e383128): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:55 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:55 crc kubenswrapper[4761]: E0307 07:50:55.828112 4761 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" podUID="c9d2eccd-e600-437b-b36a-a3ed8e383128" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.871710 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.871772 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.871783 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.871799 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.871811 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.974162 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.974205 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.974215 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.974229 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.974242 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.076979 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.077019 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.077028 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.077042 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.077052 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:56Z","lastTransitionTime":"2026-03-07T07:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.179326 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.179422 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.179438 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.179463 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.179480 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:56Z","lastTransitionTime":"2026-03-07T07:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.217993 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-9pvvx"] Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.219363 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.219454 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.235630 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.246359 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgzfv\" (UniqueName: \"kubernetes.io/projected/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-kube-api-access-jgzfv\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.246531 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.250897 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.259430 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.272051 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.280747 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9pvvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.282051 4761 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.282090 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.282105 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.282120 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.282131 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:56Z","lastTransitionTime":"2026-03-07T07:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.290260 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.299224 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.307857 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.315289 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.320882 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.327392 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.341903 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\
\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o
://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.347314 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgzfv\" (UniqueName: \"kubernetes.io/projected/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-kube-api-access-jgzfv\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.347396 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:56 crc 
kubenswrapper[4761]: E0307 07:50:56.347557 4761 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.347641 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs podName:d879fe59-4c7f-4af7-8c06-f3462f8e07d9 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:56.847617587 +0000 UTC m=+113.756784072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs") pod "network-metrics-daemon-9pvvx" (UID: "d879fe59-4c7f-4af7-8c06-f3462f8e07d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.349426 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.358487 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" event={"ID":"c9d2eccd-e600-437b-b36a-a3ed8e383128","Type":"ContainerStarted","Data":"7767a76fb82c5eb7f085750ba99e084ac3accb5e00225dcea2b2d6712c8f4481"} Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.359815 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.360657 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:56 crc kubenswrapper[4761]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 07 07:50:56 crc kubenswrapper[4761]: set -euo pipefail Mar 07 07:50:56 crc kubenswrapper[4761]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 07 07:50:56 crc kubenswrapper[4761]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 07 07:50:56 crc kubenswrapper[4761]: # As the secret mount is optional we must wait for the files to be present. Mar 07 07:50:56 crc kubenswrapper[4761]: # The service is created in monitor.yaml and this is created in sdn.yaml. Mar 07 07:50:56 crc kubenswrapper[4761]: TS=$(date +%s) Mar 07 07:50:56 crc kubenswrapper[4761]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 07 07:50:56 crc kubenswrapper[4761]: HAS_LOGGED_INFO=0 Mar 07 07:50:56 crc kubenswrapper[4761]: Mar 07 07:50:56 crc kubenswrapper[4761]: log_missing_certs(){ Mar 07 07:50:56 crc kubenswrapper[4761]: CUR_TS=$(date +%s) Mar 07 07:50:56 crc kubenswrapper[4761]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 07 07:50:56 crc kubenswrapper[4761]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 07 07:50:56 crc kubenswrapper[4761]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 07 07:50:56 crc kubenswrapper[4761]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. 
Mar 07 07:50:56 crc kubenswrapper[4761]: HAS_LOGGED_INFO=1 Mar 07 07:50:56 crc kubenswrapper[4761]: fi Mar 07 07:50:56 crc kubenswrapper[4761]: } Mar 07 07:50:56 crc kubenswrapper[4761]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Mar 07 07:50:56 crc kubenswrapper[4761]: log_missing_certs Mar 07 07:50:56 crc kubenswrapper[4761]: sleep 5 Mar 07 07:50:56 crc kubenswrapper[4761]: done Mar 07 07:50:56 crc kubenswrapper[4761]: Mar 07 07:50:56 crc kubenswrapper[4761]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 07 07:50:56 crc kubenswrapper[4761]: exec /usr/bin/kube-rbac-proxy \ Mar 07 07:50:56 crc kubenswrapper[4761]: --logtostderr \ Mar 07 07:50:56 crc kubenswrapper[4761]: --secure-listen-address=:9108 \ Mar 07 07:50:56 crc kubenswrapper[4761]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 07 07:50:56 crc kubenswrapper[4761]: --upstream=http://127.0.0.1:29108/ \ Mar 07 07:50:56 crc kubenswrapper[4761]: --tls-private-key-file=${TLS_PK} \ Mar 07 07:50:56 crc kubenswrapper[4761]: --tls-cert-file=${TLS_CERT} Mar 07 07:50:56 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcxbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-cfb62_openshift-ovn-kubernetes(c9d2eccd-e600-437b-b36a-a3ed8e383128): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:56 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.363323 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgzfv\" (UniqueName: \"kubernetes.io/projected/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-kube-api-access-jgzfv\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.364185 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:56 crc kubenswrapper[4761]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:56 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:56 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:56 crc 
kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:56 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:56 crc kubenswrapper[4761]: fi Mar 07 07:50:56 crc kubenswrapper[4761]: Mar 07 07:50:56 crc kubenswrapper[4761]: ovn_v4_join_subnet_opt= Mar 07 07:50:56 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:50:56 crc kubenswrapper[4761]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 07 07:50:56 crc kubenswrapper[4761]: fi Mar 07 07:50:56 crc kubenswrapper[4761]: ovn_v6_join_subnet_opt= Mar 07 07:50:56 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:50:56 crc kubenswrapper[4761]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 07 07:50:56 crc kubenswrapper[4761]: fi Mar 07 07:50:56 crc kubenswrapper[4761]: Mar 07 07:50:56 crc kubenswrapper[4761]: ovn_v4_transit_switch_subnet_opt= Mar 07 07:50:56 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:50:56 crc kubenswrapper[4761]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 07 07:50:56 crc kubenswrapper[4761]: fi Mar 07 07:50:56 crc kubenswrapper[4761]: ovn_v6_transit_switch_subnet_opt= Mar 07 07:50:56 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:50:56 crc kubenswrapper[4761]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 07 07:50:56 crc kubenswrapper[4761]: fi Mar 07 07:50:56 crc kubenswrapper[4761]: Mar 07 07:50:56 crc kubenswrapper[4761]: dns_name_resolver_enabled_flag= Mar 07 07:50:56 crc kubenswrapper[4761]: if [[ "false" == "true" ]]; then Mar 07 07:50:56 crc kubenswrapper[4761]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 07 07:50:56 crc kubenswrapper[4761]: fi Mar 07 07:50:56 crc kubenswrapper[4761]: Mar 07 07:50:56 crc kubenswrapper[4761]: persistent_ips_enabled_flag= Mar 07 07:50:56 crc kubenswrapper[4761]: if [[ "true" == "true" ]]; then Mar 07 07:50:56 crc kubenswrapper[4761]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 07 07:50:56 crc 
kubenswrapper[4761]: fi Mar 07 07:50:56 crc kubenswrapper[4761]: Mar 07 07:50:56 crc kubenswrapper[4761]: # This is needed so that converting clusters from GA to TP Mar 07 07:50:56 crc kubenswrapper[4761]: # will rollout control plane pods as well Mar 07 07:50:56 crc kubenswrapper[4761]: network_segmentation_enabled_flag= Mar 07 07:50:56 crc kubenswrapper[4761]: multi_network_enabled_flag= Mar 07 07:50:56 crc kubenswrapper[4761]: if [[ "true" == "true" ]]; then Mar 07 07:50:56 crc kubenswrapper[4761]: multi_network_enabled_flag="--enable-multi-network" Mar 07 07:50:56 crc kubenswrapper[4761]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 07 07:50:56 crc kubenswrapper[4761]: fi Mar 07 07:50:56 crc kubenswrapper[4761]: Mar 07 07:50:56 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 07 07:50:56 crc kubenswrapper[4761]: exec /usr/bin/ovnkube \ Mar 07 07:50:56 crc kubenswrapper[4761]: --enable-interconnect \ Mar 07 07:50:56 crc kubenswrapper[4761]: --init-cluster-manager "${K8S_NODE}" \ Mar 07 07:50:56 crc kubenswrapper[4761]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 07 07:50:56 crc kubenswrapper[4761]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 07 07:50:56 crc kubenswrapper[4761]: --metrics-bind-address "127.0.0.1:29108" \ Mar 07 07:50:56 crc kubenswrapper[4761]: --metrics-enable-pprof \ Mar 07 07:50:56 crc kubenswrapper[4761]: --metrics-enable-config-duration \ Mar 07 07:50:56 crc kubenswrapper[4761]: ${ovn_v4_join_subnet_opt} \ Mar 07 07:50:56 crc kubenswrapper[4761]: ${ovn_v6_join_subnet_opt} \ Mar 07 07:50:56 crc kubenswrapper[4761]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 07 07:50:56 crc kubenswrapper[4761]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 07 07:50:56 crc kubenswrapper[4761]: ${dns_name_resolver_enabled_flag} \ Mar 07 07:50:56 crc kubenswrapper[4761]: ${persistent_ips_enabled_flag} \ Mar 07 07:50:56 crc kubenswrapper[4761]: 
${multi_network_enabled_flag} \ Mar 07 07:50:56 crc kubenswrapper[4761]: ${network_segmentation_enabled_flag} Mar 07 07:50:56 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcxbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-cfb62_openshift-ovn-kubernetes(c9d2eccd-e600-437b-b36a-a3ed8e383128): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:56 crc 
kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.366871 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" podUID="c9d2eccd-e600-437b-b36a-a3ed8e383128" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.380328 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.384058 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.384100 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.384112 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.384129 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.384142 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:56Z","lastTransitionTime":"2026-03-07T07:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.389536 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.398536 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.407994 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.430390 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.438585 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.465538 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.475244 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.486838 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.486880 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.486889 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.486904 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.486915 4761 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:56Z","lastTransitionTime":"2026-03-07T07:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.492008 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.499528 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9pvvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.507899 4761 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.517657 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.524917 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.535153 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.543765 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.550602 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.559197 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.568673 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.589373 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.589407 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.589416 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.589430 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.589439 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:56Z","lastTransitionTime":"2026-03-07T07:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.692507 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.692550 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.692561 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.692579 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.692593 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:56Z","lastTransitionTime":"2026-03-07T07:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.707091 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j77hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-p8mn8_openshift-multus(66842cd2-650d-4f30-b620-d0b0e40d8f46): CreateContainerConfigError: services have not yet been read at least once, cannot 
construct envvars" logger="UnhandledError" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.707293 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rq94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.707665 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:56 crc kubenswrapper[4761]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 07 07:50:56 crc kubenswrapper[4761]: apiVersion: v1 Mar 07 07:50:56 crc kubenswrapper[4761]: clusters: Mar 07 07:50:56 crc kubenswrapper[4761]: - cluster: Mar 07 07:50:56 crc kubenswrapper[4761]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 07 07:50:56 crc kubenswrapper[4761]: server: https://api-int.crc.testing:6443 Mar 07 07:50:56 crc kubenswrapper[4761]: name: default-cluster Mar 07 07:50:56 crc kubenswrapper[4761]: contexts: Mar 07 07:50:56 crc kubenswrapper[4761]: - context: Mar 07 07:50:56 crc 
kubenswrapper[4761]: cluster: default-cluster Mar 07 07:50:56 crc kubenswrapper[4761]: namespace: default Mar 07 07:50:56 crc kubenswrapper[4761]: user: default-auth Mar 07 07:50:56 crc kubenswrapper[4761]: name: default-context Mar 07 07:50:56 crc kubenswrapper[4761]: current-context: default-context Mar 07 07:50:56 crc kubenswrapper[4761]: kind: Config Mar 07 07:50:56 crc kubenswrapper[4761]: preferences: {} Mar 07 07:50:56 crc kubenswrapper[4761]: users: Mar 07 07:50:56 crc kubenswrapper[4761]: - name: default-auth Mar 07 07:50:56 crc kubenswrapper[4761]: user: Mar 07 07:50:56 crc kubenswrapper[4761]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 07 07:50:56 crc kubenswrapper[4761]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 07 07:50:56 crc kubenswrapper[4761]: EOF Mar 07 07:50:56 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5l7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-9zpnq_openshift-ovn-kubernetes(19ab486f-60a2-4522-a589-79b4c4375e53): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:56 crc kubenswrapper[4761]: > 
logger="UnhandledError" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.708927 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.708982 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" podUID="66842cd2-650d-4f30-b620-d0b0e40d8f46" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.710561 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rq94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.711933 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.796015 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.796094 4761 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.796114 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.796138 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.796155 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:56Z","lastTransitionTime":"2026-03-07T07:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.854935 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.855170 4761 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.855273 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs podName:d879fe59-4c7f-4af7-8c06-f3462f8e07d9 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:57.855244828 +0000 UTC m=+114.764411333 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs") pod "network-metrics-daemon-9pvvx" (UID: "d879fe59-4c7f-4af7-8c06-f3462f8e07d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.898640 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.898704 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.898783 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.898816 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.898839 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:56Z","lastTransitionTime":"2026-03-07T07:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.001284 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.001326 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.001337 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.001351 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.001363 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.104025 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.104096 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.104132 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.104177 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.104207 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.207857 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.207928 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.207947 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.207973 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.207990 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.310576 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.310638 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.310661 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.310689 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.310713 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.414001 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.414039 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.414050 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.414067 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.414079 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.516381 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.516435 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.516452 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.516475 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.516493 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.619431 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.619466 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.619474 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.619487 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.619496 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.705022 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.705408 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.705513 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.705638 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:57 crc kubenswrapper[4761]: E0307 07:50:57.705915 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:57 crc kubenswrapper[4761]: E0307 07:50:57.705972 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:57 crc kubenswrapper[4761]: E0307 07:50:57.706062 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:57 crc kubenswrapper[4761]: E0307 07:50:57.706283 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:50:57 crc kubenswrapper[4761]: E0307 07:50:57.707849 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:57 crc kubenswrapper[4761]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 07 07:50:57 crc kubenswrapper[4761]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 07 07:50:57 crc kubenswrapper[4761]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8j7cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-d7fhg_openshift-multus(e012dce7-a788-4dab-b758-5ace07b2c150): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:57 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:57 crc kubenswrapper[4761]: E0307 07:50:57.709166 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-d7fhg" podUID="e012dce7-a788-4dab-b758-5ace07b2c150" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.722044 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.722099 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.722125 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.722153 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.722177 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.824647 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.824704 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.824769 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.824808 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.824839 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.865970 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:57 crc kubenswrapper[4761]: E0307 07:50:57.866219 4761 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:50:57 crc kubenswrapper[4761]: E0307 07:50:57.866337 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs podName:d879fe59-4c7f-4af7-8c06-f3462f8e07d9 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:59.866310708 +0000 UTC m=+116.775477203 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs") pod "network-metrics-daemon-9pvvx" (UID: "d879fe59-4c7f-4af7-8c06-f3462f8e07d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.927576 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.927665 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.927691 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.927758 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.927784 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.030763 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.030797 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.030807 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.030828 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.030843 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.135572 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.135656 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.135687 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.135785 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.135814 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.239794 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.239861 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.239881 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.239907 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.239925 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.342435 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.342486 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.342495 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.342509 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.342518 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.446092 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.446186 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.446200 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.446224 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.446257 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.549853 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.549944 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.549961 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.549987 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.550004 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.653401 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.653481 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.653530 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.653556 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.653573 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.756918 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.756989 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.757010 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.757039 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.757063 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.860243 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.860290 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.860304 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.860323 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.860335 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.963477 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.963525 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.963543 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.963566 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.963582 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.066426 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.066469 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.066518 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.066539 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.066550 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.168750 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.168807 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.168824 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.168847 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.168865 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.271887 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.271945 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.271963 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.271987 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.272003 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.374068 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.374148 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.374163 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.374192 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.374210 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.477555 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.477632 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.477649 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.477678 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.477696 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.580189 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.580242 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.580259 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.580283 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.580300 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.682371 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.682424 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.682435 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.682453 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.682465 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.705445 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.705554 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.705557 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.705486 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:59 crc kubenswrapper[4761]: E0307 07:50:59.705694 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:59 crc kubenswrapper[4761]: E0307 07:50:59.705842 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:59 crc kubenswrapper[4761]: E0307 07:50:59.705934 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:59 crc kubenswrapper[4761]: E0307 07:50:59.706081 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.785398 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.785434 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.785444 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.785458 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.785485 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.887735 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:59 crc kubenswrapper[4761]: E0307 07:50:59.887828 4761 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:50:59 crc kubenswrapper[4761]: E0307 07:50:59.887885 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs podName:d879fe59-4c7f-4af7-8c06-f3462f8e07d9 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:03.887867263 +0000 UTC m=+120.797033738 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs") pod "network-metrics-daemon-9pvvx" (UID: "d879fe59-4c7f-4af7-8c06-f3462f8e07d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.888200 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.888220 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.888229 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.888244 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.888256 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.990109 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.990169 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.990191 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.990220 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.990242 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.092665 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.092762 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.092788 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.092815 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.092840 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:00Z","lastTransitionTime":"2026-03-07T07:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.195208 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.195252 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.195268 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.195284 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.195294 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:00Z","lastTransitionTime":"2026-03-07T07:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.297429 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.297503 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.297518 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.297539 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.297555 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:00Z","lastTransitionTime":"2026-03-07T07:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.399696 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.399794 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.399812 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.399844 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.399861 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:00Z","lastTransitionTime":"2026-03-07T07:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.502120 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.502152 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.502160 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.502172 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.502180 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:00Z","lastTransitionTime":"2026-03-07T07:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.604211 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.604262 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.604278 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.604298 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.604313 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:00Z","lastTransitionTime":"2026-03-07T07:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:00 crc kubenswrapper[4761]: E0307 07:51:00.706449 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Re
startPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.706496 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.706524 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.706539 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.706557 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.706636 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:00Z","lastTransitionTime":"2026-03-07T07:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:00 crc kubenswrapper[4761]: E0307 07:51:00.707297 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:51:00 crc kubenswrapper[4761]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 07 07:51:00 crc kubenswrapper[4761]: while [ true ]; Mar 07 07:51:00 crc kubenswrapper[4761]: do Mar 07 07:51:00 crc kubenswrapper[4761]: for f in $(ls /tmp/serviceca); do Mar 07 07:51:00 crc kubenswrapper[4761]: echo $f Mar 07 07:51:00 crc kubenswrapper[4761]: ca_file_path="/tmp/serviceca/${f}" Mar 07 07:51:00 crc kubenswrapper[4761]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 07 07:51:00 crc kubenswrapper[4761]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 07 07:51:00 crc kubenswrapper[4761]: if [ -e "${reg_dir_path}" ]; then Mar 07 07:51:00 crc kubenswrapper[4761]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 07 07:51:00 crc kubenswrapper[4761]: else Mar 07 07:51:00 crc kubenswrapper[4761]: mkdir $reg_dir_path Mar 07 07:51:00 crc kubenswrapper[4761]: cp $ca_file_path $reg_dir_path/ca.crt Mar 07 07:51:00 crc kubenswrapper[4761]: fi Mar 07 07:51:00 crc kubenswrapper[4761]: done Mar 07 07:51:00 crc kubenswrapper[4761]: for d in $(ls /etc/docker/certs.d); do Mar 07 07:51:00 crc kubenswrapper[4761]: echo $d Mar 07 07:51:00 crc kubenswrapper[4761]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 07 07:51:00 crc kubenswrapper[4761]: reg_conf_path="/tmp/serviceca/${dp}" Mar 07 07:51:00 crc kubenswrapper[4761]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 07 07:51:00 crc kubenswrapper[4761]: rm -rf /etc/docker/certs.d/$d Mar 07 07:51:00 crc kubenswrapper[4761]: fi Mar 07 07:51:00 crc kubenswrapper[4761]: done Mar 07 07:51:00 crc kubenswrapper[4761]: sleep 60 & wait ${!} Mar 07 07:51:00 crc kubenswrapper[4761]: done Mar 07 07:51:00 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgsp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-tbbjn_openshift-image-registry(bae31fe3-35c2-49ba-a314-78ade009741c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:51:00 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:51:00 crc kubenswrapper[4761]: E0307 07:51:00.708812 4761 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 07 07:51:00 crc kubenswrapper[4761]: E0307 07:51:00.708842 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-tbbjn" podUID="bae31fe3-35c2-49ba-a314-78ade009741c" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.809102 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.809144 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.809153 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.809168 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.809177 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:00Z","lastTransitionTime":"2026-03-07T07:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.911467 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.911503 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.911511 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.911524 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.911533 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:00Z","lastTransitionTime":"2026-03-07T07:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.013904 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.013958 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.013968 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.013985 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.013995 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.116366 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.116402 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.116412 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.116425 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.116434 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.217952 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.217984 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.217992 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.218005 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.218014 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.320326 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.320358 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.320367 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.320380 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.320389 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.424030 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.424076 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.424096 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.424118 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.424134 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.526889 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.526927 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.526943 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.526962 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.526977 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.629533 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.629564 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.629574 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.629591 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.629601 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.705657 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:01 crc kubenswrapper[4761]: E0307 07:51:01.705827 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.705855 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.705665 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:01 crc kubenswrapper[4761]: E0307 07:51:01.705980 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.706020 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:01 crc kubenswrapper[4761]: E0307 07:51:01.706060 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:01 crc kubenswrapper[4761]: E0307 07:51:01.706101 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.731625 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.731662 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.731674 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.731690 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.731701 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.834041 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.834102 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.834121 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.834147 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.834166 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.936806 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.936889 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.936913 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.936956 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.936978 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.040001 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.040074 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.040098 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.040127 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.040154 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.052545 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.052608 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.052625 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.052648 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.052665 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.054103 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.054246 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.054335 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:34.054299105 +0000 UTC m=+150.963465620 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.054388 4761 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.054489 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.054519 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:34.054481129 +0000 UTC m=+150.963647644 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.054822 4761 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.054963 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:34.05493024 +0000 UTC m=+150.964096815 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.069500 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.075252 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.075321 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.075381 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.075415 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.075438 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.090672 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.095584 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.095643 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.095660 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.095689 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.095713 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.108182 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.113365 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.113445 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.113469 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.113500 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.113522 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.126454 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.129992 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.130055 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.130075 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.130100 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.130116 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.139608 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.139891 4761 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.142050 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.142106 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.142124 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.142149 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.142221 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.155570 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.155668 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.155788 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.155817 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.155830 4761 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.155951 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:34.155910781 +0000 UTC m=+151.065077326 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.155845 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.156003 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.156020 4761 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.156070 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:34.156053655 +0000 UTC m=+151.065220200 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.244994 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.245033 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.245043 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.245059 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.245069 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.347615 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.347661 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.347672 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.347687 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.347700 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.450062 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.450136 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.450154 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.450180 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.450198 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.553362 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.553404 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.553412 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.553426 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.553435 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.656029 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.656061 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.656069 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.656082 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.656092 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.759034 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.759094 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.759104 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.759117 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.759126 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.861498 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.861545 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.861557 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.861572 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.861583 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.964491 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.964543 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.964555 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.964572 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.964613 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.067159 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.067219 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.067235 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.067259 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.067275 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:03Z","lastTransitionTime":"2026-03-07T07:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.170339 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.170419 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.170442 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.170474 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.170492 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:03Z","lastTransitionTime":"2026-03-07T07:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.273276 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.273332 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.273348 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.273366 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.273379 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:03Z","lastTransitionTime":"2026-03-07T07:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.375605 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.375663 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.375674 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.375697 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.375713 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:03Z","lastTransitionTime":"2026-03-07T07:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.479190 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.479268 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.479291 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.479321 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.479342 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:03Z","lastTransitionTime":"2026-03-07T07:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.582773 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.582845 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.582869 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.582900 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.582923 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:03Z","lastTransitionTime":"2026-03-07T07:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:03 crc kubenswrapper[4761]: E0307 07:51:03.683224 4761 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.705021 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.705081 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.705144 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:03 crc kubenswrapper[4761]: E0307 07:51:03.705192 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.705207 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:03 crc kubenswrapper[4761]: E0307 07:51:03.705590 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:03 crc kubenswrapper[4761]: E0307 07:51:03.705736 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:03 crc kubenswrapper[4761]: E0307 07:51:03.705907 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.725075 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.737999 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.748865 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.757675 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.766822 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.791764 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.802555 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.823046 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.834449 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.844352 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.853175 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.864232 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.872727 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.879212 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.891074 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.898083 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9pvvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.973980 4761 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:03 crc kubenswrapper[4761]: E0307 07:51:03.974243 4761 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:51:03 crc kubenswrapper[4761]: E0307 07:51:03.974354 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs podName:d879fe59-4c7f-4af7-8c06-f3462f8e07d9 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:11.974323454 +0000 UTC m=+128.883489969 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs") pod "network-metrics-daemon-9pvvx" (UID: "d879fe59-4c7f-4af7-8c06-f3462f8e07d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:51:04 crc kubenswrapper[4761]: E0307 07:51:04.212407 4761 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 07 07:51:04 crc kubenswrapper[4761]: E0307 07:51:04.709023 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:51:04 crc kubenswrapper[4761]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:51:04 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:51:04 crc kubenswrapper[4761]: set -o allexport Mar 07 07:51:04 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:51:04 crc kubenswrapper[4761]: set +o allexport Mar 07 07:51:04 crc kubenswrapper[4761]: fi Mar 07 07:51:04 crc kubenswrapper[4761]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 07 07:51:04 crc kubenswrapper[4761]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 07 07:51:04 crc kubenswrapper[4761]: ho_enable="--enable-hybrid-overlay" Mar 07 07:51:04 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 07 07:51:04 crc kubenswrapper[4761]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 07 07:51:04 crc kubenswrapper[4761]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 07 07:51:04 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:51:04 crc kubenswrapper[4761]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 07 07:51:04 crc kubenswrapper[4761]: --webhook-host=127.0.0.1 \ Mar 07 07:51:04 crc kubenswrapper[4761]: --webhook-port=9743 \ Mar 07 07:51:04 crc kubenswrapper[4761]: ${ho_enable} \ Mar 07 07:51:04 crc kubenswrapper[4761]: --enable-interconnect \ Mar 07 07:51:04 crc kubenswrapper[4761]: --disable-approver \ Mar 07 07:51:04 crc kubenswrapper[4761]: 
--extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 07 07:51:04 crc kubenswrapper[4761]: --wait-for-kubernetes-api=200s \ Mar 07 07:51:04 crc kubenswrapper[4761]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 07 07:51:04 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:51:04 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false
,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:51:04 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:51:04 crc kubenswrapper[4761]: E0307 07:51:04.711016 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:51:04 crc kubenswrapper[4761]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:51:04 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:51:04 crc kubenswrapper[4761]: set -o allexport Mar 07 07:51:04 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:51:04 crc kubenswrapper[4761]: set +o allexport Mar 07 07:51:04 crc kubenswrapper[4761]: fi Mar 07 07:51:04 crc kubenswrapper[4761]: Mar 07 07:51:04 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 07 07:51:04 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:51:04 crc kubenswrapper[4761]: --disable-webhook \ Mar 07 07:51:04 crc kubenswrapper[4761]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 07 07:51:04 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:51:04 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:51:04 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:51:04 crc kubenswrapper[4761]: E0307 07:51:04.712470 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.330273 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.345302 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.356112 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.373526 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.384678 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9pvvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.400981 4761 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113f
cf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.412738 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.421473 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.434080 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.448165 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.457209 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.498363 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.508047 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.517920 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.532251 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.539658 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.546402 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.704702 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.704805 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:05 crc kubenswrapper[4761]: E0307 07:51:05.704950 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.704704 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.705059 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:05 crc kubenswrapper[4761]: E0307 07:51:05.705214 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:05 crc kubenswrapper[4761]: E0307 07:51:05.705330 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:05 crc kubenswrapper[4761]: E0307 07:51:05.705494 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:07 crc kubenswrapper[4761]: I0307 07:51:07.704813 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:07 crc kubenswrapper[4761]: I0307 07:51:07.704975 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:07 crc kubenswrapper[4761]: I0307 07:51:07.705024 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:07 crc kubenswrapper[4761]: E0307 07:51:07.704980 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:07 crc kubenswrapper[4761]: I0307 07:51:07.704813 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:07 crc kubenswrapper[4761]: E0307 07:51:07.705826 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:07 crc kubenswrapper[4761]: E0307 07:51:07.706121 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:07 crc kubenswrapper[4761]: E0307 07:51:07.706386 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:07 crc kubenswrapper[4761]: E0307 07:51:07.708095 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j77hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-p8mn8_openshift-multus(66842cd2-650d-4f30-b620-d0b0e40d8f46): 
CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:51:07 crc kubenswrapper[4761]: E0307 07:51:07.709959 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" podUID="66842cd2-650d-4f30-b620-d0b0e40d8f46" Mar 07 07:51:07 crc kubenswrapper[4761]: E0307 07:51:07.714363 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:51:07 crc kubenswrapper[4761]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 07 07:51:07 crc kubenswrapper[4761]: set -euo pipefail Mar 07 07:51:07 crc kubenswrapper[4761]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 07 07:51:07 crc kubenswrapper[4761]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 07 07:51:07 crc kubenswrapper[4761]: # As the secret mount is optional we must wait for the files to be present. Mar 07 07:51:07 crc kubenswrapper[4761]: # The service is created in monitor.yaml and this is created in sdn.yaml. Mar 07 07:51:07 crc kubenswrapper[4761]: TS=$(date +%s) Mar 07 07:51:07 crc kubenswrapper[4761]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 07 07:51:07 crc kubenswrapper[4761]: HAS_LOGGED_INFO=0 Mar 07 07:51:07 crc kubenswrapper[4761]: Mar 07 07:51:07 crc kubenswrapper[4761]: log_missing_certs(){ Mar 07 07:51:07 crc kubenswrapper[4761]: CUR_TS=$(date +%s) Mar 07 07:51:07 crc kubenswrapper[4761]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 07 07:51:07 crc kubenswrapper[4761]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. 
Mar 07 07:51:07 crc kubenswrapper[4761]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 07 07:51:07 crc kubenswrapper[4761]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 07 07:51:07 crc kubenswrapper[4761]: HAS_LOGGED_INFO=1 Mar 07 07:51:07 crc kubenswrapper[4761]: fi Mar 07 07:51:07 crc kubenswrapper[4761]: } Mar 07 07:51:07 crc kubenswrapper[4761]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Mar 07 07:51:07 crc kubenswrapper[4761]: log_missing_certs Mar 07 07:51:07 crc kubenswrapper[4761]: sleep 5 Mar 07 07:51:07 crc kubenswrapper[4761]: done Mar 07 07:51:07 crc kubenswrapper[4761]: Mar 07 07:51:07 crc kubenswrapper[4761]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 07 07:51:07 crc kubenswrapper[4761]: exec /usr/bin/kube-rbac-proxy \ Mar 07 07:51:07 crc kubenswrapper[4761]: --logtostderr \ Mar 07 07:51:07 crc kubenswrapper[4761]: --secure-listen-address=:9108 \ Mar 07 07:51:07 crc kubenswrapper[4761]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 07 07:51:07 crc kubenswrapper[4761]: --upstream=http://127.0.0.1:29108/ \ Mar 07 07:51:07 crc kubenswrapper[4761]: --tls-private-key-file=${TLS_PK} \ Mar 07 07:51:07 crc kubenswrapper[4761]: --tls-cert-file=${TLS_CERT} Mar 07 07:51:07 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcxbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-cfb62_openshift-ovn-kubernetes(c9d2eccd-e600-437b-b36a-a3ed8e383128): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:51:07 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:51:07 crc kubenswrapper[4761]: E0307 07:51:07.717251 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:51:07 crc kubenswrapper[4761]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:51:07 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:51:07 crc kubenswrapper[4761]: set -o allexport Mar 07 07:51:07 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:51:07 crc kubenswrapper[4761]: set +o allexport Mar 07 07:51:07 crc kubenswrapper[4761]: fi Mar 07 07:51:07 crc kubenswrapper[4761]: Mar 07 07:51:07 crc kubenswrapper[4761]: ovn_v4_join_subnet_opt= Mar 07 07:51:07 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:51:07 crc kubenswrapper[4761]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 07 
07:51:07 crc kubenswrapper[4761]: fi Mar 07 07:51:07 crc kubenswrapper[4761]: ovn_v6_join_subnet_opt= Mar 07 07:51:07 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:51:07 crc kubenswrapper[4761]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 07 07:51:07 crc kubenswrapper[4761]: fi Mar 07 07:51:07 crc kubenswrapper[4761]: Mar 07 07:51:07 crc kubenswrapper[4761]: ovn_v4_transit_switch_subnet_opt= Mar 07 07:51:07 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:51:07 crc kubenswrapper[4761]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 07 07:51:07 crc kubenswrapper[4761]: fi Mar 07 07:51:07 crc kubenswrapper[4761]: ovn_v6_transit_switch_subnet_opt= Mar 07 07:51:07 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:51:07 crc kubenswrapper[4761]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 07 07:51:07 crc kubenswrapper[4761]: fi Mar 07 07:51:07 crc kubenswrapper[4761]: Mar 07 07:51:07 crc kubenswrapper[4761]: dns_name_resolver_enabled_flag= Mar 07 07:51:07 crc kubenswrapper[4761]: if [[ "false" == "true" ]]; then Mar 07 07:51:07 crc kubenswrapper[4761]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 07 07:51:07 crc kubenswrapper[4761]: fi Mar 07 07:51:07 crc kubenswrapper[4761]: Mar 07 07:51:07 crc kubenswrapper[4761]: persistent_ips_enabled_flag= Mar 07 07:51:07 crc kubenswrapper[4761]: if [[ "true" == "true" ]]; then Mar 07 07:51:07 crc kubenswrapper[4761]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 07 07:51:07 crc kubenswrapper[4761]: fi Mar 07 07:51:07 crc kubenswrapper[4761]: Mar 07 07:51:07 crc kubenswrapper[4761]: # This is needed so that converting clusters from GA to TP Mar 07 07:51:07 crc kubenswrapper[4761]: # will rollout control plane pods as well Mar 07 07:51:07 crc kubenswrapper[4761]: network_segmentation_enabled_flag= Mar 07 07:51:07 crc kubenswrapper[4761]: multi_network_enabled_flag= Mar 07 07:51:07 crc 
kubenswrapper[4761]: if [[ "true" == "true" ]]; then Mar 07 07:51:07 crc kubenswrapper[4761]: multi_network_enabled_flag="--enable-multi-network" Mar 07 07:51:07 crc kubenswrapper[4761]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 07 07:51:07 crc kubenswrapper[4761]: fi Mar 07 07:51:07 crc kubenswrapper[4761]: Mar 07 07:51:07 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 07 07:51:07 crc kubenswrapper[4761]: exec /usr/bin/ovnkube \ Mar 07 07:51:07 crc kubenswrapper[4761]: --enable-interconnect \ Mar 07 07:51:07 crc kubenswrapper[4761]: --init-cluster-manager "${K8S_NODE}" \ Mar 07 07:51:07 crc kubenswrapper[4761]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 07 07:51:07 crc kubenswrapper[4761]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 07 07:51:07 crc kubenswrapper[4761]: --metrics-bind-address "127.0.0.1:29108" \ Mar 07 07:51:07 crc kubenswrapper[4761]: --metrics-enable-pprof \ Mar 07 07:51:07 crc kubenswrapper[4761]: --metrics-enable-config-duration \ Mar 07 07:51:07 crc kubenswrapper[4761]: ${ovn_v4_join_subnet_opt} \ Mar 07 07:51:07 crc kubenswrapper[4761]: ${ovn_v6_join_subnet_opt} \ Mar 07 07:51:07 crc kubenswrapper[4761]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 07 07:51:07 crc kubenswrapper[4761]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 07 07:51:07 crc kubenswrapper[4761]: ${dns_name_resolver_enabled_flag} \ Mar 07 07:51:07 crc kubenswrapper[4761]: ${persistent_ips_enabled_flag} \ Mar 07 07:51:07 crc kubenswrapper[4761]: ${multi_network_enabled_flag} \ Mar 07 07:51:07 crc kubenswrapper[4761]: ${network_segmentation_enabled_flag} Mar 07 07:51:07 crc kubenswrapper[4761]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcxbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-cfb62_openshift-ovn-kubernetes(c9d2eccd-e600-437b-b36a-a3ed8e383128): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:51:07 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:51:07 crc kubenswrapper[4761]: E0307 07:51:07.718884 4761 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" podUID="c9d2eccd-e600-437b-b36a-a3ed8e383128" Mar 07 07:51:08 crc kubenswrapper[4761]: E0307 07:51:08.707269 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:51:08 crc kubenswrapper[4761]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 07 07:51:08 crc kubenswrapper[4761]: set -uo pipefail Mar 07 07:51:08 crc kubenswrapper[4761]: Mar 07 07:51:08 crc kubenswrapper[4761]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 07 07:51:08 crc kubenswrapper[4761]: Mar 07 07:51:08 crc kubenswrapper[4761]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 07 07:51:08 crc kubenswrapper[4761]: HOSTS_FILE="/etc/hosts" Mar 07 07:51:08 crc kubenswrapper[4761]: TEMP_FILE="/etc/hosts.tmp" Mar 07 07:51:08 crc kubenswrapper[4761]: Mar 07 07:51:08 crc kubenswrapper[4761]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 07 07:51:08 crc kubenswrapper[4761]: Mar 07 07:51:08 crc kubenswrapper[4761]: # Make a temporary file with the old hosts file's attributes. Mar 07 07:51:08 crc kubenswrapper[4761]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 07 07:51:08 crc kubenswrapper[4761]: echo "Failed to preserve hosts file. Exiting." 
Mar 07 07:51:08 crc kubenswrapper[4761]: exit 1 Mar 07 07:51:08 crc kubenswrapper[4761]: fi Mar 07 07:51:08 crc kubenswrapper[4761]: Mar 07 07:51:08 crc kubenswrapper[4761]: while true; do Mar 07 07:51:08 crc kubenswrapper[4761]: declare -A svc_ips Mar 07 07:51:08 crc kubenswrapper[4761]: for svc in "${services[@]}"; do Mar 07 07:51:08 crc kubenswrapper[4761]: # Fetch service IP from cluster dns if present. We make several tries Mar 07 07:51:08 crc kubenswrapper[4761]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 07 07:51:08 crc kubenswrapper[4761]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 07 07:51:08 crc kubenswrapper[4761]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 07 07:51:08 crc kubenswrapper[4761]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:51:08 crc kubenswrapper[4761]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:51:08 crc kubenswrapper[4761]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:51:08 crc kubenswrapper[4761]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 07 07:51:08 crc kubenswrapper[4761]: for i in ${!cmds[*]} Mar 07 07:51:08 crc kubenswrapper[4761]: do Mar 07 07:51:08 crc kubenswrapper[4761]: ips=($(eval "${cmds[i]}")) Mar 07 07:51:08 crc kubenswrapper[4761]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 07 07:51:08 crc kubenswrapper[4761]: svc_ips["${svc}"]="${ips[@]}" Mar 07 07:51:08 crc kubenswrapper[4761]: break Mar 07 07:51:08 crc kubenswrapper[4761]: fi Mar 07 07:51:08 crc kubenswrapper[4761]: done Mar 07 07:51:08 crc kubenswrapper[4761]: done Mar 07 07:51:08 crc kubenswrapper[4761]: Mar 07 07:51:08 crc kubenswrapper[4761]: # Update /etc/hosts only if we get valid service IPs Mar 07 07:51:08 crc kubenswrapper[4761]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 07 07:51:08 crc kubenswrapper[4761]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 07 07:51:08 crc kubenswrapper[4761]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 07 07:51:08 crc kubenswrapper[4761]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 07 07:51:08 crc kubenswrapper[4761]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 07 07:51:08 crc kubenswrapper[4761]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 07 07:51:08 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:51:08 crc kubenswrapper[4761]: continue Mar 07 07:51:08 crc kubenswrapper[4761]: fi Mar 07 07:51:08 crc kubenswrapper[4761]: Mar 07 07:51:08 crc kubenswrapper[4761]: # Append resolver entries for services Mar 07 07:51:08 crc kubenswrapper[4761]: rc=0 Mar 07 07:51:08 crc kubenswrapper[4761]: for svc in "${!svc_ips[@]}"; do Mar 07 07:51:08 crc kubenswrapper[4761]: for ip in ${svc_ips[${svc}]}; do Mar 07 07:51:08 crc kubenswrapper[4761]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 07 07:51:08 crc kubenswrapper[4761]: done Mar 07 07:51:08 crc kubenswrapper[4761]: done Mar 07 07:51:08 crc kubenswrapper[4761]: if [[ $rc -ne 0 ]]; then Mar 07 07:51:08 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:51:08 crc kubenswrapper[4761]: continue Mar 07 07:51:08 crc kubenswrapper[4761]: fi Mar 07 07:51:08 crc kubenswrapper[4761]: Mar 07 07:51:08 crc kubenswrapper[4761]: Mar 07 07:51:08 crc kubenswrapper[4761]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 07 07:51:08 crc kubenswrapper[4761]: # Replace /etc/hosts with our modified version if needed Mar 07 07:51:08 crc kubenswrapper[4761]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 07 07:51:08 crc kubenswrapper[4761]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 07 07:51:08 crc kubenswrapper[4761]: fi Mar 07 07:51:08 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:51:08 crc kubenswrapper[4761]: unset svc_ips Mar 07 07:51:08 crc kubenswrapper[4761]: done Mar 07 07:51:08 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t7fk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bfzp8_openshift-dns(b293cb75-0655-49e5-811c-14da8b769d26): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:51:08 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:51:08 crc kubenswrapper[4761]: E0307 07:51:08.708452 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bfzp8" podUID="b293cb75-0655-49e5-811c-14da8b769d26" Mar 07 07:51:08 crc kubenswrapper[4761]: E0307 07:51:08.708609 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:51:08 crc kubenswrapper[4761]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > 
/etc/ovn/kubeconfig Mar 07 07:51:08 crc kubenswrapper[4761]: apiVersion: v1 Mar 07 07:51:08 crc kubenswrapper[4761]: clusters: Mar 07 07:51:08 crc kubenswrapper[4761]: - cluster: Mar 07 07:51:08 crc kubenswrapper[4761]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 07 07:51:08 crc kubenswrapper[4761]: server: https://api-int.crc.testing:6443 Mar 07 07:51:08 crc kubenswrapper[4761]: name: default-cluster Mar 07 07:51:08 crc kubenswrapper[4761]: contexts: Mar 07 07:51:08 crc kubenswrapper[4761]: - context: Mar 07 07:51:08 crc kubenswrapper[4761]: cluster: default-cluster Mar 07 07:51:08 crc kubenswrapper[4761]: namespace: default Mar 07 07:51:08 crc kubenswrapper[4761]: user: default-auth Mar 07 07:51:08 crc kubenswrapper[4761]: name: default-context Mar 07 07:51:08 crc kubenswrapper[4761]: current-context: default-context Mar 07 07:51:08 crc kubenswrapper[4761]: kind: Config Mar 07 07:51:08 crc kubenswrapper[4761]: preferences: {} Mar 07 07:51:08 crc kubenswrapper[4761]: users: Mar 07 07:51:08 crc kubenswrapper[4761]: - name: default-auth Mar 07 07:51:08 crc kubenswrapper[4761]: user: Mar 07 07:51:08 crc kubenswrapper[4761]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 07 07:51:08 crc kubenswrapper[4761]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 07 07:51:08 crc kubenswrapper[4761]: EOF Mar 07 07:51:08 crc kubenswrapper[4761]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5l7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-9zpnq_openshift-ovn-kubernetes(19ab486f-60a2-4522-a589-79b4c4375e53): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:51:08 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:51:08 crc kubenswrapper[4761]: E0307 07:51:08.709277 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:51:08 crc kubenswrapper[4761]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 07 07:51:08 crc kubenswrapper[4761]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 07 07:51:08 crc kubenswrapper[4761]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8j7cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-d7fhg_openshift-multus(e012dce7-a788-4dab-b758-5ace07b2c150): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:51:08 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:51:08 crc kubenswrapper[4761]: E0307 07:51:08.709984 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" Mar 07 07:51:08 crc kubenswrapper[4761]: E0307 07:51:08.711227 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-d7fhg" podUID="e012dce7-a788-4dab-b758-5ace07b2c150" Mar 07 07:51:09 crc kubenswrapper[4761]: I0307 07:51:09.084107 4761 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 07 07:51:09 crc kubenswrapper[4761]: E0307 07:51:09.214518 4761 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 07:51:09 crc kubenswrapper[4761]: I0307 07:51:09.705777 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:09 crc kubenswrapper[4761]: I0307 07:51:09.705887 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:09 crc kubenswrapper[4761]: E0307 07:51:09.706128 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:09 crc kubenswrapper[4761]: I0307 07:51:09.706201 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:09 crc kubenswrapper[4761]: I0307 07:51:09.706278 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:09 crc kubenswrapper[4761]: E0307 07:51:09.706433 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:09 crc kubenswrapper[4761]: E0307 07:51:09.706595 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:09 crc kubenswrapper[4761]: E0307 07:51:09.707108 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.400666 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5c5670a681a8ba4917604b33c82abd32f8c80038b83c5af78d9696d11a0cb9ea"} Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.404325 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"e299bf5e993cd25632f1e8fdf9e29a03066a6f0ca63413030ab50ce2fd395053"} Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.404349 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897"} Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.414227 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.433114 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5670a681a8ba4917604b33c82abd32f8c80038b83c5af78d9696d11a0cb9ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.450346 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.467136 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.483609 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.512610 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.525973 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.555669 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.568866 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.584798 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.598684 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.617744 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\"
,\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.632850 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.644906 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.659608 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.669042 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9pvvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.678598 4761 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.686080 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.700177 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.712165 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9pvvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.722571 4761 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113f
cf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.734112 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.745417 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.756300 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.767034 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.779386 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.790434 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5670a681a8ba4917604b33c82abd32f8c80038b83c5af78d9696d11a0cb9ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.811672 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/
etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:0
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.825262 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e299bf5e993cd25632f1e8fdf9e29a03066a6f0ca6341303
0ab50ce2fd395053\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z
\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.837542 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.866999 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.876099 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:11 crc kubenswrapper[4761]: I0307 07:51:11.705654 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:11 crc kubenswrapper[4761]: E0307 07:51:11.705809 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:11 crc kubenswrapper[4761]: I0307 07:51:11.705812 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:11 crc kubenswrapper[4761]: I0307 07:51:11.705846 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:11 crc kubenswrapper[4761]: I0307 07:51:11.705897 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:11 crc kubenswrapper[4761]: E0307 07:51:11.706057 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:11 crc kubenswrapper[4761]: E0307 07:51:11.706162 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:11 crc kubenswrapper[4761]: E0307 07:51:11.706235 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.075681 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:12 crc kubenswrapper[4761]: E0307 07:51:12.075892 4761 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:51:12 crc kubenswrapper[4761]: E0307 07:51:12.076232 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs podName:d879fe59-4c7f-4af7-8c06-f3462f8e07d9 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:28.076206119 +0000 UTC m=+144.985372634 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs") pod "network-metrics-daemon-9pvvx" (UID: "d879fe59-4c7f-4af7-8c06-f3462f8e07d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.403840 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.403906 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.403922 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.403946 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.403963 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:12Z","lastTransitionTime":"2026-03-07T07:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:12 crc kubenswrapper[4761]: E0307 07:51:12.422548 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.428002 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.428055 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.428071 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.428093 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.428111 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:12Z","lastTransitionTime":"2026-03-07T07:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:12 crc kubenswrapper[4761]: E0307 07:51:12.445355 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.450348 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.450399 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.450416 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.450440 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.450456 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:12Z","lastTransitionTime":"2026-03-07T07:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:12 crc kubenswrapper[4761]: E0307 07:51:12.468672 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.473244 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.473290 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.473309 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.473330 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.473345 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:12Z","lastTransitionTime":"2026-03-07T07:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:12 crc kubenswrapper[4761]: E0307 07:51:12.488568 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.493149 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.493202 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.493216 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.493237 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.493252 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:12Z","lastTransitionTime":"2026-03-07T07:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:12 crc kubenswrapper[4761]: E0307 07:51:12.504852 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:12 crc kubenswrapper[4761]: E0307 07:51:12.505020 4761 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.705083 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.705159 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.705195 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.705234 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:13 crc kubenswrapper[4761]: E0307 07:51:13.711355 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:13 crc kubenswrapper[4761]: E0307 07:51:13.711610 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:13 crc kubenswrapper[4761]: E0307 07:51:13.712403 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:13 crc kubenswrapper[4761]: E0307 07:51:13.712741 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.726067 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.737994 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.759393 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.769625 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9pvvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.789898 4761 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113f
cf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.807753 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.824785 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.840238 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.854904 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.867455 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.884033 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5670a681a8ba4917604b33c82abd32f8c80038b83c5af78d9696d11a0cb9ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.912429 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/
etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:0
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.921430 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e299bf5e993cd25632f1e8fdf9e29a03066a6f0ca6341303
0ab50ce2fd395053\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z
\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.933614 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.953538 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.962684 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: E0307 07:51:14.215217 4761 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.422817 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tbbjn" event={"ID":"bae31fe3-35c2-49ba-a314-78ade009741c","Type":"ContainerStarted","Data":"8bc4a23cd94694cdfb9916d296336f79035a4560f61732ecd41abe3ba557d772"} Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.439366 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\
\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"
2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.450274 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.460778 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.479513 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.492050 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9pvvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.508018 4761 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5670a681a8ba4917604b33c82abd32f8c80038b83c5af78d9696d11a0cb9ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.523843 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.538581 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.553087 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.563934 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bc4a23cd94694cdfb9916d296336f79035a4560f61732ecd41abe3ba557d772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.592394 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba
267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4
a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.607938 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e299bf5e993cd25632f1e8fdf9e29a03066a6f0ca63413030ab50ce2fd395053\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32b229a75858c34885fc176aa90e290b00256790
43869ecaa76a8edfb6a9d897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.625560 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.651649 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.665262 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.685812 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:15 crc kubenswrapper[4761]: I0307 07:51:15.705157 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:15 crc kubenswrapper[4761]: I0307 07:51:15.705258 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:15 crc kubenswrapper[4761]: I0307 07:51:15.705284 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:15 crc kubenswrapper[4761]: E0307 07:51:15.705372 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:15 crc kubenswrapper[4761]: E0307 07:51:15.705522 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:15 crc kubenswrapper[4761]: I0307 07:51:15.705546 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:15 crc kubenswrapper[4761]: E0307 07:51:15.705633 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:15 crc kubenswrapper[4761]: E0307 07:51:15.705827 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.433154 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"49ad2fdf0419e23920208e00718a2e001be60ed0c87766998f34e9850956f1ce"} Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.445562 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49ad2fdf0419e23920208e00718a2e001be60ed0c87766998f34e9850956f1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 
07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.456900 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9
78d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastStat
e\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.465489 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.472020 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.487744 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.498346 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9pvvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.505245 4761 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bc4a23cd94694cdfb9916d296336f79035a4560f61732ecd41abe3ba557d772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.513481 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5670a681a8ba4917604b33c82abd32f8c80038b83c5af78d9696d11a0cb9ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.526381 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.536852 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.549386 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.575834 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.587314 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.615468 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.628469 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e299bf5e993cd25632f1e8fdf9e29a03066a6f0ca63413030ab50ce2fd395053\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.649448 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.705559 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.705590 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.705658 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:17 crc kubenswrapper[4761]: E0307 07:51:17.705817 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.705875 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:17 crc kubenswrapper[4761]: E0307 07:51:17.706035 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:17 crc kubenswrapper[4761]: E0307 07:51:17.706134 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:17 crc kubenswrapper[4761]: E0307 07:51:17.706225 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:19 crc kubenswrapper[4761]: E0307 07:51:19.216738 4761 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 07:51:19 crc kubenswrapper[4761]: I0307 07:51:19.705950 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:19 crc kubenswrapper[4761]: I0307 07:51:19.705991 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:19 crc kubenswrapper[4761]: I0307 07:51:19.706112 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:19 crc kubenswrapper[4761]: I0307 07:51:19.706168 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:19 crc kubenswrapper[4761]: E0307 07:51:19.706330 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:19 crc kubenswrapper[4761]: E0307 07:51:19.707136 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:19 crc kubenswrapper[4761]: E0307 07:51:19.707585 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:19 crc kubenswrapper[4761]: E0307 07:51:19.707670 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.445285 4761 generic.go:334] "Generic (PLEG): container finished" podID="66842cd2-650d-4f30-b620-d0b0e40d8f46" containerID="8545414d96a54912f40289847da722b3508a527562a8a082e45f6b3360620c9c" exitCode=0 Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.445378 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" event={"ID":"66842cd2-650d-4f30-b620-d0b0e40d8f46","Type":"ContainerDied","Data":"8545414d96a54912f40289847da722b3508a527562a8a082e45f6b3360620c9c"} Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.448339 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6450aff59e4a6738702467fe945060f96111849c1e200ee12a65d6f98e54bd80"} Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.448395 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"691ee135d3475dd4f598e8799c09b3bba1930f9fe53b8b591f89e25755c63e28"} Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.450558 4761 generic.go:334] "Generic (PLEG): container finished" 
podID="19ab486f-60a2-4522-a589-79b4c4375e53" containerID="bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455" exitCode=0 Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.450617 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455"} Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.581217 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=48.581195992 podStartE2EDuration="48.581195992s" podCreationTimestamp="2026-03-07 07:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:20.580502675 +0000 UTC m=+137.489669190" watchObservedRunningTime="2026-03-07 07:51:20.581195992 +0000 UTC m=+137.490362477" Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.659402 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-tbbjn" podStartSLOduration=60.659383782 podStartE2EDuration="1m0.659383782s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:20.658822848 +0000 UTC m=+137.567989343" watchObservedRunningTime="2026-03-07 07:51:20.659383782 +0000 UTC m=+137.568550257" Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.706586 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=49.706566814 podStartE2EDuration="49.706566814s" podCreationTimestamp="2026-03-07 07:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-07 07:51:20.693901515 +0000 UTC m=+137.603067990" watchObservedRunningTime="2026-03-07 07:51:20.706566814 +0000 UTC m=+137.615733299" Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.723147 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podStartSLOduration=60.723132789 podStartE2EDuration="1m0.723132789s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:20.707300712 +0000 UTC m=+137.616467197" watchObservedRunningTime="2026-03-07 07:51:20.723132789 +0000 UTC m=+137.632299264" Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.457696 4761 generic.go:334] "Generic (PLEG): container finished" podID="66842cd2-650d-4f30-b620-d0b0e40d8f46" containerID="de554a0d48e45b014959ce97ee9755f7f5a6ae8b7468be02f13c3edb0c3b1cf4" exitCode=0 Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.457767 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" event={"ID":"66842cd2-650d-4f30-b620-d0b0e40d8f46","Type":"ContainerDied","Data":"de554a0d48e45b014959ce97ee9755f7f5a6ae8b7468be02f13c3edb0c3b1cf4"} Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.465707 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerStarted","Data":"1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269"} Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.465806 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerStarted","Data":"34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b"} Mar 07 07:51:21 crc 
kubenswrapper[4761]: I0307 07:51:21.465835 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerStarted","Data":"d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59"} Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.465859 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerStarted","Data":"60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383"} Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.465882 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerStarted","Data":"963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9"} Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.465908 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerStarted","Data":"f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a"} Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.705262 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.705410 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.705506 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:21 crc kubenswrapper[4761]: E0307 07:51:21.705590 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.705632 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:21 crc kubenswrapper[4761]: E0307 07:51:21.705830 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:21 crc kubenswrapper[4761]: E0307 07:51:21.705907 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:21 crc kubenswrapper[4761]: E0307 07:51:21.706005 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.731189 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.473472 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" event={"ID":"c9d2eccd-e600-437b-b36a-a3ed8e383128","Type":"ContainerStarted","Data":"ad53a131b32bee7849d760c4f23978c2fecbc0c19bc97e18cf13f9c4ed603a01"} Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.473680 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" event={"ID":"c9d2eccd-e600-437b-b36a-a3ed8e383128","Type":"ContainerStarted","Data":"8f83b10e722c6068f38081143c40639e9b6c66cb0746bc28d91d491266798f60"} Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.478613 4761 generic.go:334] "Generic (PLEG): container finished" podID="66842cd2-650d-4f30-b620-d0b0e40d8f46" containerID="68f9735c17907c7ad5cc1808c1a1ab260ffe5dda9e7848aba20fb2e91da7a10f" exitCode=0 Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.478752 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" 
event={"ID":"66842cd2-650d-4f30-b620-d0b0e40d8f46","Type":"ContainerDied","Data":"68f9735c17907c7ad5cc1808c1a1ab260ffe5dda9e7848aba20fb2e91da7a10f"} Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.494101 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=1.494077455 podStartE2EDuration="1.494077455s" podCreationTimestamp="2026-03-07 07:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:22.493811248 +0000 UTC m=+139.402977783" watchObservedRunningTime="2026-03-07 07:51:22.494077455 +0000 UTC m=+139.403243960" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.518017 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" podStartSLOduration=62.517989449 podStartE2EDuration="1m2.517989449s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:22.517660451 +0000 UTC m=+139.426826966" watchObservedRunningTime="2026-03-07 07:51:22.517989449 +0000 UTC m=+139.427155964" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.581615 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.581698 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.581757 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.581791 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.581815 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:22Z","lastTransitionTime":"2026-03-07T07:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.631126 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj"] Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.631534 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.636879 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.637079 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.637146 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.637305 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.664702 4761 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.678122 4761 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from 
k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.710504 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.710543 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.710567 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.710589 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.710659 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.812082 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.812315 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.812375 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.812430 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.812488 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.812490 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.812488 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.813375 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.822634 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.840081 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.951137 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: W0307 07:51:22.966525 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35d38fc6_874d_4e9b_ad26_f3b50fd3869f.slice/crio-6a3d3f0f8ea9b06e3498645933085d8fc0db0ba1b954b8e5a5530fc864b77777 WatchSource:0}: Error finding container 6a3d3f0f8ea9b06e3498645933085d8fc0db0ba1b954b8e5a5530fc864b77777: Status 404 returned error can't find the container with id 6a3d3f0f8ea9b06e3498645933085d8fc0db0ba1b954b8e5a5530fc864b77777 Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.485882 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" event={"ID":"35d38fc6-874d-4e9b-ad26-f3b50fd3869f","Type":"ContainerStarted","Data":"1992f320bf0f514c7983002bd8e1b9de1dfff115240f9347d69e2d0396c10265"} Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.486228 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" 
event={"ID":"35d38fc6-874d-4e9b-ad26-f3b50fd3869f","Type":"ContainerStarted","Data":"6a3d3f0f8ea9b06e3498645933085d8fc0db0ba1b954b8e5a5530fc864b77777"} Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.492989 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerStarted","Data":"8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40"} Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.496227 4761 generic.go:334] "Generic (PLEG): container finished" podID="66842cd2-650d-4f30-b620-d0b0e40d8f46" containerID="334b9c72ddcf94d787782d54209e295438e63f4ad510cd0ee4ccd41d8c8f1d93" exitCode=0 Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.496277 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" event={"ID":"66842cd2-650d-4f30-b620-d0b0e40d8f46","Type":"ContainerDied","Data":"334b9c72ddcf94d787782d54209e295438e63f4ad510cd0ee4ccd41d8c8f1d93"} Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.506035 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" podStartSLOduration=63.506013552 podStartE2EDuration="1m3.506013552s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:23.502008224 +0000 UTC m=+140.411174699" watchObservedRunningTime="2026-03-07 07:51:23.506013552 +0000 UTC m=+140.415180047" Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.705077 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.705112 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.705123 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.706375 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:23 crc kubenswrapper[4761]: E0307 07:51:23.706520 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:23 crc kubenswrapper[4761]: E0307 07:51:23.706651 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:23 crc kubenswrapper[4761]: E0307 07:51:23.707032 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:23 crc kubenswrapper[4761]: E0307 07:51:23.707162 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:24 crc kubenswrapper[4761]: E0307 07:51:24.217190 4761 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 07:51:24 crc kubenswrapper[4761]: I0307 07:51:24.502195 4761 generic.go:334] "Generic (PLEG): container finished" podID="66842cd2-650d-4f30-b620-d0b0e40d8f46" containerID="6ed8b3c6202607c045f6e81cc43f1cae698f2661513fd4231a4a87432be30238" exitCode=0 Mar 07 07:51:24 crc kubenswrapper[4761]: I0307 07:51:24.502295 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" event={"ID":"66842cd2-650d-4f30-b620-d0b0e40d8f46","Type":"ContainerDied","Data":"6ed8b3c6202607c045f6e81cc43f1cae698f2661513fd4231a4a87432be30238"} Mar 07 07:51:24 crc kubenswrapper[4761]: I0307 07:51:24.504154 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bfzp8" event={"ID":"b293cb75-0655-49e5-811c-14da8b769d26","Type":"ContainerStarted","Data":"7f8423c9a4d4059134f3ee40ed1282a4e83c586cdb81e420224e5d6e711e3f28"} Mar 07 07:51:24 crc kubenswrapper[4761]: I0307 07:51:24.506966 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d7fhg" 
event={"ID":"e012dce7-a788-4dab-b758-5ace07b2c150","Type":"ContainerStarted","Data":"ade39212f5be5eba8c4c503357adbd943542b70dcf1e4a7b7f089a8ddaaf64f5"} Mar 07 07:51:24 crc kubenswrapper[4761]: I0307 07:51:24.552751 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-d7fhg" podStartSLOduration=64.552698937 podStartE2EDuration="1m4.552698937s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:24.552664887 +0000 UTC m=+141.461831362" watchObservedRunningTime="2026-03-07 07:51:24.552698937 +0000 UTC m=+141.461865422" Mar 07 07:51:25 crc kubenswrapper[4761]: I0307 07:51:25.517034 4761 generic.go:334] "Generic (PLEG): container finished" podID="66842cd2-650d-4f30-b620-d0b0e40d8f46" containerID="acbecead03edaf9f2f4f248724e729b2dc30e12f15b80a9d4cb92c1a75921fb9" exitCode=0 Mar 07 07:51:25 crc kubenswrapper[4761]: I0307 07:51:25.517096 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" event={"ID":"66842cd2-650d-4f30-b620-d0b0e40d8f46","Type":"ContainerDied","Data":"acbecead03edaf9f2f4f248724e729b2dc30e12f15b80a9d4cb92c1a75921fb9"} Mar 07 07:51:25 crc kubenswrapper[4761]: I0307 07:51:25.557590 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bfzp8" podStartSLOduration=65.557568782 podStartE2EDuration="1m5.557568782s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:24.568012531 +0000 UTC m=+141.477179016" watchObservedRunningTime="2026-03-07 07:51:25.557568782 +0000 UTC m=+142.466735287" Mar 07 07:51:25 crc kubenswrapper[4761]: I0307 07:51:25.705016 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:25 crc kubenswrapper[4761]: I0307 07:51:25.705106 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:25 crc kubenswrapper[4761]: E0307 07:51:25.705140 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:25 crc kubenswrapper[4761]: I0307 07:51:25.705194 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:25 crc kubenswrapper[4761]: E0307 07:51:25.705223 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:25 crc kubenswrapper[4761]: I0307 07:51:25.705267 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:25 crc kubenswrapper[4761]: E0307 07:51:25.705310 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:25 crc kubenswrapper[4761]: E0307 07:51:25.705380 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:25 crc kubenswrapper[4761]: I0307 07:51:25.716124 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 07 07:51:26 crc kubenswrapper[4761]: I0307 07:51:26.526424 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerStarted","Data":"59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546"} Mar 07 07:51:26 crc kubenswrapper[4761]: I0307 07:51:26.526839 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:51:26 crc kubenswrapper[4761]: I0307 07:51:26.526873 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:51:26 crc kubenswrapper[4761]: I0307 07:51:26.532307 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" event={"ID":"66842cd2-650d-4f30-b620-d0b0e40d8f46","Type":"ContainerStarted","Data":"4ad9627b5a422971f65e6ce8d1d120c0ee05f7b10563bc9af54bc6f0c7810a54"} Mar 07 07:51:26 crc kubenswrapper[4761]: I0307 07:51:26.567020 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:51:26 crc kubenswrapper[4761]: I0307 07:51:26.573019 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podStartSLOduration=66.572993315 podStartE2EDuration="1m6.572993315s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:26.569706325 +0000 UTC m=+143.478872800" watchObservedRunningTime="2026-03-07 07:51:26.572993315 +0000 UTC m=+143.482159820" Mar 07 07:51:26 crc kubenswrapper[4761]: I0307 07:51:26.598405 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=1.598385945 podStartE2EDuration="1.598385945s" podCreationTimestamp="2026-03-07 07:51:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:26.598215071 +0000 UTC m=+143.507381586" watchObservedRunningTime="2026-03-07 07:51:26.598385945 +0000 UTC m=+143.507552450" Mar 07 07:51:27 crc kubenswrapper[4761]: I0307 07:51:27.536006 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:51:27 crc kubenswrapper[4761]: I0307 07:51:27.569676 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:51:27 crc kubenswrapper[4761]: I0307 07:51:27.614103 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" podStartSLOduration=67.614079264 podStartE2EDuration="1m7.614079264s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-03-07 07:51:26.661741803 +0000 UTC m=+143.570908338" watchObservedRunningTime="2026-03-07 07:51:27.614079264 +0000 UTC m=+144.523245739" Mar 07 07:51:27 crc kubenswrapper[4761]: I0307 07:51:27.706845 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:27 crc kubenswrapper[4761]: I0307 07:51:27.706926 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:27 crc kubenswrapper[4761]: E0307 07:51:27.706953 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:27 crc kubenswrapper[4761]: I0307 07:51:27.706985 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:27 crc kubenswrapper[4761]: I0307 07:51:27.707016 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:27 crc kubenswrapper[4761]: E0307 07:51:27.707066 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:27 crc kubenswrapper[4761]: E0307 07:51:27.707141 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:27 crc kubenswrapper[4761]: E0307 07:51:27.707209 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:28 crc kubenswrapper[4761]: I0307 07:51:28.077192 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:28 crc kubenswrapper[4761]: E0307 07:51:28.077461 4761 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:51:28 crc kubenswrapper[4761]: E0307 07:51:28.077595 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs podName:d879fe59-4c7f-4af7-8c06-f3462f8e07d9 nodeName:}" failed. 
No retries permitted until 2026-03-07 07:52:00.077566805 +0000 UTC m=+176.986733320 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs") pod "network-metrics-daemon-9pvvx" (UID: "d879fe59-4c7f-4af7-8c06-f3462f8e07d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:51:28 crc kubenswrapper[4761]: I0307 07:51:28.114632 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9pvvx"] Mar 07 07:51:28 crc kubenswrapper[4761]: I0307 07:51:28.538760 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:28 crc kubenswrapper[4761]: E0307 07:51:28.539264 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:29 crc kubenswrapper[4761]: E0307 07:51:29.218918 4761 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 07:51:29 crc kubenswrapper[4761]: I0307 07:51:29.705444 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:29 crc kubenswrapper[4761]: I0307 07:51:29.705472 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:29 crc kubenswrapper[4761]: E0307 07:51:29.705634 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:29 crc kubenswrapper[4761]: I0307 07:51:29.705702 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:29 crc kubenswrapper[4761]: E0307 07:51:29.706016 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:29 crc kubenswrapper[4761]: E0307 07:51:29.705872 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:30 crc kubenswrapper[4761]: I0307 07:51:30.705393 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:30 crc kubenswrapper[4761]: E0307 07:51:30.705566 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:31 crc kubenswrapper[4761]: I0307 07:51:31.705463 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:31 crc kubenswrapper[4761]: I0307 07:51:31.705602 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:31 crc kubenswrapper[4761]: E0307 07:51:31.705661 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:31 crc kubenswrapper[4761]: I0307 07:51:31.705683 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:31 crc kubenswrapper[4761]: E0307 07:51:31.705884 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:31 crc kubenswrapper[4761]: E0307 07:51:31.706040 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:32 crc kubenswrapper[4761]: I0307 07:51:32.704914 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:32 crc kubenswrapper[4761]: E0307 07:51:32.705103 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:33 crc kubenswrapper[4761]: I0307 07:51:33.705004 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:33 crc kubenswrapper[4761]: I0307 07:51:33.705030 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:33 crc kubenswrapper[4761]: I0307 07:51:33.705124 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:33 crc kubenswrapper[4761]: E0307 07:51:33.707849 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:33 crc kubenswrapper[4761]: E0307 07:51:33.707968 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:33 crc kubenswrapper[4761]: E0307 07:51:33.708091 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:34 crc kubenswrapper[4761]: I0307 07:51:34.140319 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:34 crc kubenswrapper[4761]: I0307 07:51:34.140528 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.140568 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:52:38.140522897 +0000 UTC m=+215.049689412 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:34 crc kubenswrapper[4761]: I0307 07:51:34.140689 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.140756 4761 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.140840 4761 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.140872 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:52:38.140842195 +0000 UTC m=+215.050008760 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.140911 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:52:38.140892846 +0000 UTC m=+215.050059481 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 07 07:51:34 crc kubenswrapper[4761]: I0307 07:51:34.241508 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 07 07:51:34 crc kubenswrapper[4761]: I0307 07:51:34.241625 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.241816 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.241842 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.241841 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.241907 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.241934 4761 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.241858 4761 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.242028 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 07:52:38.241993706 +0000 UTC m=+215.151160221 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.242070 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 07:52:38.242051827 +0000 UTC m=+215.151218432 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 07 07:51:34 crc kubenswrapper[4761]: I0307 07:51:34.705008 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx"
Mar 07 07:51:34 crc kubenswrapper[4761]: I0307 07:51:34.709626 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 07 07:51:34 crc kubenswrapper[4761]: I0307 07:51:34.709798 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Mar 07 07:51:35 crc kubenswrapper[4761]: I0307 07:51:35.705804 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 07 07:51:35 crc kubenswrapper[4761]: I0307 07:51:35.705817 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 07 07:51:35 crc kubenswrapper[4761]: I0307 07:51:35.705990 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 07 07:51:35 crc kubenswrapper[4761]: I0307 07:51:35.708539 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 07 07:51:35 crc kubenswrapper[4761]: I0307 07:51:35.709025 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 07 07:51:35 crc kubenswrapper[4761]: I0307 07:51:35.709432 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 07 07:51:35 crc kubenswrapper[4761]: I0307 07:51:35.709871 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.777344 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.831277 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l9gzh"]
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.831975 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.839840 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.840856 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g5b4l"]
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.861152 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.861825 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc"]
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.862802 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.863820 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.864212 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.865885 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.866667 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.866774 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk"]
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.867369 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.867808 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.869132 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.872200 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.872597 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.873036 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.874006 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xqmxc"]
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.876010 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.877249 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5d2nn"]
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.888655 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.889515 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.889917 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.890359 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.890431 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.890459 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.890539 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.890615 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.890806 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.891143 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.891254 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.891423 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.891524 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.891562 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.891666 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.891893 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.891900 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.891973 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.892002 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.892064 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.892219 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.892309 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.894755 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh"]
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.895385 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.898184 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.898498 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.898668 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.898918 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.899271 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.900750 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw"]
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.901290 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.902057 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc"]
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.903007 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.903463 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6qsbw"]
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.903946 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6qsbw"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.904394 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-n8d4g"]
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.904972 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.905063 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46"]
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.905401 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.905911 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.906031 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.906109 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.906176 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.906197 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.906371 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.906120 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.907012 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.907152 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.907242 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.907352 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.907253 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.907527 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.907812 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.912656 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.913073 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.913310 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.914533 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fsrlc"]
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.914925 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-2lhb8"]
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.915186 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zjd48"]
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.915544 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.916521 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fsrlc"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.916571 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2lhb8"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.916665 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.916825 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.916854 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.916825 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.917326 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.917333 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.917478 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.917538 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.917566 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.917583 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.917897 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.917921 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.918071 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.918079 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.918150 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.918177 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.918219 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.918279 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.919194 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.919256 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.919316 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.919413 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.919482 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.919422 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.919602 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.919751 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.920000 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.923759 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.930704 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.936778 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr"]
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.937984 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2lv84"]
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.939851 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2lv84"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.940455 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941119 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcrlq\" (UniqueName: \"kubernetes.io/projected/f7d70be0-84a3-4969-bbe9-283e1588343a-kube-api-access-mcrlq\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941154 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-audit\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941180 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-client-ca\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941202 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941223 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828a167b-cf1b-433c-844a-7ca236afd4b9-config\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941238 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-config\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941269 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzrlj\" (UniqueName: \"kubernetes.io/projected/0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873-kube-api-access-zzrlj\") pod \"openshift-apiserver-operator-796bbdcf4f-nvjxk\" (UID: \"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941287 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941304 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7d70be0-84a3-4969-bbe9-283e1588343a-serving-cert\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941323 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-config\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941342 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941362 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f7d70be0-84a3-4969-bbe9-283e1588343a-encryption-config\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941381 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941399 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941418 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzrv9\" (UniqueName: \"kubernetes.io/projected/21e2c5a2-e968-4844-8843-23870b388e6d-kube-api-access-bzrv9\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941436 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f7d70be0-84a3-4969-bbe9-283e1588343a-node-pullsecrets\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941452 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941470 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f7d70be0-84a3-4969-bbe9-283e1588343a-etcd-client\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941488 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/828a167b-cf1b-433c-844a-7ca236afd4b9-images\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941506 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ggp8\" (UniqueName: \"kubernetes.io/projected/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-kube-api-access-2ggp8\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh"
Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941525 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vx7x\" (UniqueName: \"kubernetes.io/projected/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-kube-api-access-4vx7x\") pod \"route-controller-manager-6576b87f9c-7r7nc\"
(UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941540 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/828a167b-cf1b-433c-844a-7ca236afd4b9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941558 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-client-ca\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941575 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941598 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941614 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941631 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-image-import-ca\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941670 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-serving-cert\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941689 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941709 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-config\") pod 
\"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941742 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941769 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-serving-cert\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941798 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f7d70be0-84a3-4969-bbe9-283e1588343a-audit-dir\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941822 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vtgc\" (UniqueName: \"kubernetes.io/projected/828a167b-cf1b-433c-844a-7ca236afd4b9-kube-api-access-4vtgc\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941861 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nvjxk\" (UID: \"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941877 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nvjxk\" (UID: \"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941897 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941919 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21e2c5a2-e968-4844-8843-23870b388e6d-audit-dir\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941937 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-etcd-serving-ca\") pod 
\"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.942436 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9vsj5"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.943522 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.944791 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-audit-policies\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.945442 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.945808 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.946455 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.946493 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 
07:51:42.946651 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.947277 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.951502 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ls7db"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.952451 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.952505 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.953376 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.990218 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.990952 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992159 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992373 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992407 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992468 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992520 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992659 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992684 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992843 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992912 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992973 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.993312 4761 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.993771 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.994063 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.994410 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-f57jx"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.994993 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.996807 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.998135 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.000183 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.000695 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.000960 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.001078 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.001264 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.001435 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-8vzkp"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.001882 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.001993 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.002171 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.002517 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.002687 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.005131 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.008533 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.008681 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5r998"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.009795 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k4zfw"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.009835 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.010702 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.013985 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.016822 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.017341 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.021167 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.023022 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.023503 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.032890 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.033372 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.033879 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wj76l"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.034326 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.034543 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.034749 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.034896 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.035123 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.035514 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.036064 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.039796 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045425 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21e2c5a2-e968-4844-8843-23870b388e6d-audit-dir\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045467 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-etcd-serving-ca\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045465 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l9gzh"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045489 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-service-ca-bundle\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 
07:51:43.045495 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21e2c5a2-e968-4844-8843-23870b388e6d-audit-dir\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045510 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-audit-policies\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045525 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045552 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcrlq\" (UniqueName: \"kubernetes.io/projected/f7d70be0-84a3-4969-bbe9-283e1588343a-kube-api-access-mcrlq\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045571 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-audit\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc 
kubenswrapper[4761]: I0307 07:51:43.045589 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d1f4462-4337-4610-9c4b-98bc1f3974e8-config\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045605 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045623 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-client-ca\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045641 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828a167b-cf1b-433c-844a-7ca236afd4b9-config\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045656 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-config\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: 
\"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045681 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzrlj\" (UniqueName: \"kubernetes.io/projected/0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873-kube-api-access-zzrlj\") pod \"openshift-apiserver-operator-796bbdcf4f-nvjxk\" (UID: \"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045697 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045727 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7d70be0-84a3-4969-bbe9-283e1588343a-serving-cert\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045745 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svh7l\" (UniqueName: \"kubernetes.io/projected/4d1f4462-4337-4610-9c4b-98bc1f3974e8-kube-api-access-svh7l\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045760 4761 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-config\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045776 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b718980-7c2c-4b0f-b605-331928c5a58e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fkrlf\" (UID: \"9b718980-7c2c-4b0f-b605-331928c5a58e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045803 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-serving-cert\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045825 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045844 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-login\") pod 
\"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045860 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f7d70be0-84a3-4969-bbe9-283e1588343a-encryption-config\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045875 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkztf\" (UniqueName: \"kubernetes.io/projected/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-kube-api-access-bkztf\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045890 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d1f4462-4337-4610-9c4b-98bc1f3974e8-auth-proxy-config\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045908 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045926 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045942 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzrv9\" (UniqueName: \"kubernetes.io/projected/21e2c5a2-e968-4844-8843-23870b388e6d-kube-api-access-bzrv9\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045959 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f7d70be0-84a3-4969-bbe9-283e1588343a-node-pullsecrets\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045976 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045990 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f7d70be0-84a3-4969-bbe9-283e1588343a-etcd-client\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " 
pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046008 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khcnt\" (UniqueName: \"kubernetes.io/projected/9b718980-7c2c-4b0f-b605-331928c5a58e-kube-api-access-khcnt\") pod \"control-plane-machine-set-operator-78cbb6b69f-fkrlf\" (UID: \"9b718980-7c2c-4b0f-b605-331928c5a58e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046025 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ggp8\" (UniqueName: \"kubernetes.io/projected/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-kube-api-access-2ggp8\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046040 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vx7x\" (UniqueName: \"kubernetes.io/projected/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-kube-api-access-4vx7x\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046055 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/828a167b-cf1b-433c-844a-7ca236afd4b9-images\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046071 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/828a167b-cf1b-433c-844a-7ca236afd4b9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046092 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-client-ca\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046110 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046125 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046142 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc 
kubenswrapper[4761]: I0307 07:51:43.046165 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-image-import-ca\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046189 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-config\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046225 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-serving-cert\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046239 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-audit-policies\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046243 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046283 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046308 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-etcd-serving-ca\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046318 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-serving-cert\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046382 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f7d70be0-84a3-4969-bbe9-283e1588343a-audit-dir\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046417 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4d1f4462-4337-4610-9c4b-98bc1f3974e8-machine-approver-tls\") pod \"machine-approver-56656f9798-h2jfh\" (UID: 
\"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046449 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vtgc\" (UniqueName: \"kubernetes.io/projected/828a167b-cf1b-433c-844a-7ca236afd4b9-kube-api-access-4vtgc\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046472 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-config\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046495 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828a167b-cf1b-433c-844a-7ca236afd4b9-config\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046504 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nvjxk\" (UID: \"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046560 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nvjxk\" (UID: \"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046582 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046950 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f7d70be0-84a3-4969-bbe9-283e1588343a-audit-dir\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.047366 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.047929 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-config\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.047983 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f7d70be0-84a3-4969-bbe9-283e1588343a-node-pullsecrets\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.048023 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-audit\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.048419 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.048487 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/828a167b-cf1b-433c-844a-7ca236afd4b9-images\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.048603 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-config\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.049277 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.049808 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-client-ca\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.050526 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-config\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.051152 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-client-ca\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.051204 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nvjxk\" (UID: \"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" Mar 07 07:51:43 crc kubenswrapper[4761]: 
I0307 07:51:43.051663 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.052764 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.053172 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-serving-cert\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.053322 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.053974 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f7d70be0-84a3-4969-bbe9-283e1588343a-etcd-client\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " 
pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.054254 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f7d70be0-84a3-4969-bbe9-283e1588343a-encryption-config\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.054293 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-n8d4g"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.054629 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.055017 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.055454 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.055549 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-image-import-ca\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.056111 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-serving-cert\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.056732 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.056738 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.057023 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.057897 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-xqmxc"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.058168 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.059769 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7d70be0-84a3-4969-bbe9-283e1588343a-serving-cert\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.059901 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.061168 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.062107 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.062148 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.063072 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/828a167b-cf1b-433c-844a-7ca236afd4b9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.063349 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g5b4l"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.064573 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5d2nn"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.065578 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6qsbw"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.066562 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.066915 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc 
kubenswrapper[4761]: I0307 07:51:43.067243 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.068798 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2lhb8"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.068860 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9475l"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.069815 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.070028 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-82d2w"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.070196 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nvjxk\" (UID: \"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.070438 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.070865 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k4zfw"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.071755 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.072666 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.073969 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.076043 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.076544 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2lv84"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.078532 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.079476 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fsrlc"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.080788 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5r998"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.081300 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" 
Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.081457 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ls7db"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.082386 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9vsj5"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.083348 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.084367 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.085324 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.089075 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.091369 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-cm8bz"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.092515 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.092647 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-lhr9n"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.093698 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lhr9n" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.094968 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.095694 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.097085 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.098215 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.099268 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.100314 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.101357 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-f57jx"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.102458 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.103484 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9475l"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.104564 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca/service-ca-9c57cc56f-wj76l"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.105855 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zjd48"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.106992 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.108261 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.109332 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.110340 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lhr9n"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.111299 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-c9px5"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.111929 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.112271 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c9px5"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.115177 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.134812 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147520 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4d1f4462-4337-4610-9c4b-98bc1f3974e8-machine-approver-tls\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147560 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-config\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147588 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-service-ca-bundle\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147613 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d1f4462-4337-4610-9c4b-98bc1f3974e8-config\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147644 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svh7l\" (UniqueName: \"kubernetes.io/projected/4d1f4462-4337-4610-9c4b-98bc1f3974e8-kube-api-access-svh7l\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147662 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b718980-7c2c-4b0f-b605-331928c5a58e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fkrlf\" (UID: \"9b718980-7c2c-4b0f-b605-331928c5a58e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147680 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-serving-cert\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147697 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: 
\"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147727 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d1f4462-4337-4610-9c4b-98bc1f3974e8-auth-proxy-config\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147744 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkztf\" (UniqueName: \"kubernetes.io/projected/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-kube-api-access-bkztf\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147767 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khcnt\" (UniqueName: \"kubernetes.io/projected/9b718980-7c2c-4b0f-b605-331928c5a58e-kube-api-access-khcnt\") pod \"control-plane-machine-set-operator-78cbb6b69f-fkrlf\" (UID: \"9b718980-7c2c-4b0f-b605-331928c5a58e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.148509 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-config\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.148646 4761 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d1f4462-4337-4610-9c4b-98bc1f3974e8-config\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.149370 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d1f4462-4337-4610-9c4b-98bc1f3974e8-auth-proxy-config\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.149613 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.151480 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-serving-cert\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.154387 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4d1f4462-4337-4610-9c4b-98bc1f3974e8-machine-approver-tls\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 
07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.155214 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.175651 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.179173 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-service-ca-bundle\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.194764 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.214943 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.234424 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.254991 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.275473 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.282742 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/9b718980-7c2c-4b0f-b605-331928c5a58e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fkrlf\" (UID: \"9b718980-7c2c-4b0f-b605-331928c5a58e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.295767 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.335197 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.355248 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.375297 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.395674 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.415580 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.435521 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.455775 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.500227 4761 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.514921 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.536158 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.556134 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.576048 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.595078 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.616012 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.636073 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.655192 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.675550 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.695380 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 07 07:51:43 
crc kubenswrapper[4761]: I0307 07:51:43.715929 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.735499 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.756048 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.775504 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.794907 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.815397 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.835594 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.856136 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.874885 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.895978 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.915486 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" 
Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.936525 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.965608 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.975261 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.997324 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.013394 4761 request.go:700] Waited for 1.002380514s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0 Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.024585 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.035122 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.055283 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.074767 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.095290 4761 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.116039 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.136701 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.156880 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.157110 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.175871 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.194794 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.215005 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.235295 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.255093 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.275628 4761 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.295741 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.315113 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.335179 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.355787 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.375513 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.395995 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.416651 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.435606 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.455375 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.475125 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 
07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.495305 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.542073 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzrlj\" (UniqueName: \"kubernetes.io/projected/0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873-kube-api-access-zzrlj\") pod \"openshift-apiserver-operator-796bbdcf4f-nvjxk\" (UID: \"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.581904 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzrv9\" (UniqueName: \"kubernetes.io/projected/21e2c5a2-e968-4844-8843-23870b388e6d-kube-api-access-bzrv9\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.587338 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vx7x\" (UniqueName: \"kubernetes.io/projected/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-kube-api-access-4vx7x\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.592620 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcrlq\" (UniqueName: \"kubernetes.io/projected/f7d70be0-84a3-4969-bbe9-283e1588343a-kube-api-access-mcrlq\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.611856 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ggp8\" (UniqueName: \"kubernetes.io/projected/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-kube-api-access-2ggp8\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.634768 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.640568 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vtgc\" (UniqueName: \"kubernetes.io/projected/828a167b-cf1b-433c-844a-7ca236afd4b9-kube-api-access-4vtgc\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.656951 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.676124 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.691916 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.695879 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.706556 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.715580 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.725491 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.736690 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.742817 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.756315 4761 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.758314 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.775138 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.795945 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.801062 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.871124 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.871274 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.871370 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.875475 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.895192 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.916160 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.934602 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.955617 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.975934 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.980304 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l9gzh"]
Mar 07 07:51:44 crc kubenswrapper[4761]: W0307 07:51:44.985682 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eb60cea_dfe0_4e7b_896c_8dc4406fbbcf.slice/crio-882cc6d1d8ff9baed06c9225585998a42ca5bcddd90b232ac913143f0ea4ff01 WatchSource:0}: Error finding container 882cc6d1d8ff9baed06c9225585998a42ca5bcddd90b232ac913143f0ea4ff01: Status 404 returned error can't find the container with id 882cc6d1d8ff9baed06c9225585998a42ca5bcddd90b232ac913143f0ea4ff01
Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.994028 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.013459 4761 request.go:700] Waited for 1.901340992s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&limit=500&resourceVersion=0
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.014700 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.038662 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk"]
Mar 07 07:51:45 crc kubenswrapper[4761]: W0307 07:51:45.052179 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c1ec5eb_b8ac_4fa9_b09d_4f3f01b29873.slice/crio-deacd60a652a993c20ace8797b8ee26b211a0574ffad049e6dc69fc8d870cd0c WatchSource:0}: Error finding container deacd60a652a993c20ace8797b8ee26b211a0574ffad049e6dc69fc8d870cd0c: Status 404 returned error can't find the container with id deacd60a652a993c20ace8797b8ee26b211a0574ffad049e6dc69fc8d870cd0c
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.053801 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khcnt\" (UniqueName: \"kubernetes.io/projected/9b718980-7c2c-4b0f-b605-331928c5a58e-kube-api-access-khcnt\") pod \"control-plane-machine-set-operator-78cbb6b69f-fkrlf\" (UID: \"9b718980-7c2c-4b0f-b605-331928c5a58e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.063615 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5d2nn"]
Mar 07 07:51:45 crc kubenswrapper[4761]: W0307 07:51:45.069351 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21e2c5a2_e968_4844_8843_23870b388e6d.slice/crio-85aa246b580d1c61a9b7d0d898416a8e0cd2e35170d5426b941bc8973d1755da WatchSource:0}: Error finding container 85aa246b580d1c61a9b7d0d898416a8e0cd2e35170d5426b941bc8973d1755da: Status 404 returned error can't find the container with id 85aa246b580d1c61a9b7d0d898416a8e0cd2e35170d5426b941bc8973d1755da
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.072413 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkztf\" (UniqueName: \"kubernetes.io/projected/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-kube-api-access-bkztf\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.086307 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svh7l\" (UniqueName: \"kubernetes.io/projected/4d1f4462-4337-4610-9c4b-98bc1f3974e8-kube-api-access-svh7l\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.113219 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh"
Mar 07 07:51:45 crc kubenswrapper[4761]: W0307 07:51:45.140427 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d1f4462_4337_4610_9c4b_98bc1f3974e8.slice/crio-163791372de3ee385a85f18632a59d616b385c0c51d983c9ea8a45a28ac55aac WatchSource:0}: Error finding container 163791372de3ee385a85f18632a59d616b385c0c51d983c9ea8a45a28ac55aac: Status 404 returned error can't find the container with id 163791372de3ee385a85f18632a59d616b385c0c51d983c9ea8a45a28ac55aac
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175237 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d704dc9c-9c1f-4f45-8438-34eda153e3b5-config\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175277 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/071d5325-8638-4180-aefa-fb07f5533bb2-encryption-config\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175301 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6v8lc\" (UID: \"86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175340 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/071d5325-8638-4180-aefa-fb07f5533bb2-etcd-client\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175360 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/687429b1-d68f-4e6e-92f6-24da382d4bfe-etcd-client\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175381 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdfpv\" (UniqueName: \"kubernetes.io/projected/7b1e7bf9-5dc9-4326-b63d-426a716351bc-kube-api-access-wdfpv\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175400 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4znv5\" (UniqueName: \"kubernetes.io/projected/86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a-kube-api-access-4znv5\") pod \"cluster-samples-operator-665b6dd947-6v8lc\" (UID: \"86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175440 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99v7b\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-kube-api-access-99v7b\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175459 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/687429b1-d68f-4e6e-92f6-24da382d4bfe-etcd-ca\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175479 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175503 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c88ead-10f8-49d9-a8c5-ebf0cb031cd0-serving-cert\") pod \"openshift-config-operator-7777fb866f-zjd48\" (UID: \"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175525 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175582 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4333d454-5d55-4214-af24-c1a056088b2f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-f57jx\" (UID: \"4333d454-5d55-4214-af24-c1a056088b2f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175643 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckg5q\" (UniqueName: \"kubernetes.io/projected/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-kube-api-access-ckg5q\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175750 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d704dc9c-9c1f-4f45-8438-34eda153e3b5-serving-cert\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175774 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f71cfd24-83ce-4450-8257-8d9d922d018d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rqq46\" (UID: \"f71cfd24-83ce-4450-8257-8d9d922d018d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175810 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/473ecd8c-4e56-40ac-9444-2d43490c6424-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175831 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dlj8\" (UniqueName: \"kubernetes.io/projected/55412b4c-53c7-4b21-8d7c-87879ef79ed0-kube-api-access-9dlj8\") pod \"downloads-7954f5f757-2lhb8\" (UID: \"55412b4c-53c7-4b21-8d7c-87879ef79ed0\") " pod="openshift-console/downloads-7954f5f757-2lhb8"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175853 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwlvk\" (UniqueName: \"kubernetes.io/projected/4333d454-5d55-4214-af24-c1a056088b2f-kube-api-access-mwlvk\") pod \"multus-admission-controller-857f4d67dd-f57jx\" (UID: \"4333d454-5d55-4214-af24-c1a056088b2f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175882 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175903 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-proxy-tls\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175924 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-serving-cert\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175974 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-config\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175998 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-service-ca\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176058 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/687429b1-d68f-4e6e-92f6-24da382d4bfe-config\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176105 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-tls\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176126 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-certificates\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176146 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/687429b1-d68f-4e6e-92f6-24da382d4bfe-serving-cert\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176166 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-trusted-ca\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176185 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-images\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176206 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/071d5325-8638-4180-aefa-fb07f5533bb2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176226 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/687429b1-d68f-4e6e-92f6-24da382d4bfe-etcd-service-ca\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176246 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvjp4\" (UniqueName: \"kubernetes.io/projected/46c88ead-10f8-49d9-a8c5-ebf0cb031cd0-kube-api-access-gvjp4\") pod \"openshift-config-operator-7777fb866f-zjd48\" (UID: \"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176293 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d704dc9c-9c1f-4f45-8438-34eda153e3b5-trusted-ca\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176314 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-trusted-ca-bundle\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176347 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-bound-sa-token\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176369 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176389 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/071d5325-8638-4180-aefa-fb07f5533bb2-audit-policies\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176408 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/00f287a9-208e-4447-9572-cbe1230c61be-metrics-tls\") pod \"dns-operator-744455d44c-2lv84\" (UID: \"00f287a9-208e-4447-9572-cbe1230c61be\") " pod="openshift-dns-operator/dns-operator-744455d44c-2lv84"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176428 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/46c88ead-10f8-49d9-a8c5-ebf0cb031cd0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zjd48\" (UID: \"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176463 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9543f0f5-dfe9-4443-816d-a6a8c4fbb012-config\") pod \"kube-apiserver-operator-766d6c64bb-knpfg\" (UID: \"9543f0f5-dfe9-4443-816d-a6a8c4fbb012\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176483 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9543f0f5-dfe9-4443-816d-a6a8c4fbb012-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-knpfg\" (UID: \"9543f0f5-dfe9-4443-816d-a6a8c4fbb012\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176505 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-oauth-serving-cert\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176526 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt98m\" (UniqueName: \"kubernetes.io/projected/f71cfd24-83ce-4450-8257-8d9d922d018d-kube-api-access-xt98m\") pod \"openshift-controller-manager-operator-756b6f6bc6-rqq46\" (UID: \"f71cfd24-83ce-4450-8257-8d9d922d018d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176576 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/473ecd8c-4e56-40ac-9444-2d43490c6424-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176598 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/071d5325-8638-4180-aefa-fb07f5533bb2-audit-dir\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176641 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jgxc\" (UniqueName: \"kubernetes.io/projected/d704dc9c-9c1f-4f45-8438-34eda153e3b5-kube-api-access-2jgxc\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176660 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/071d5325-8638-4180-aefa-fb07f5533bb2-serving-cert\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176681 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l24rt\" (UniqueName: \"kubernetes.io/projected/687429b1-d68f-4e6e-92f6-24da382d4bfe-kube-api-access-l24rt\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176729 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-oauth-config\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176790 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9543f0f5-dfe9-4443-816d-a6a8c4fbb012-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-knpfg\" (UID: \"9543f0f5-dfe9-4443-816d-a6a8c4fbb012\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176812 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92rc6\" (UniqueName: \"kubernetes.io/projected/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-kube-api-access-92rc6\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176855 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/071d5325-8638-4180-aefa-fb07f5533bb2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176877 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hptmw\" (UniqueName: \"kubernetes.io/projected/071d5325-8638-4180-aefa-fb07f5533bb2-kube-api-access-hptmw\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176899 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f71cfd24-83ce-4450-8257-8d9d922d018d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rqq46\" (UID: \"f71cfd24-83ce-4450-8257-8d9d922d018d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176920 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176965 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgqd8\" (UniqueName: \"kubernetes.io/projected/00f287a9-208e-4447-9572-cbe1230c61be-kube-api-access-zgqd8\") pod \"dns-operator-744455d44c-2lv84\" (UID: \"00f287a9-208e-4447-9572-cbe1230c61be\") " pod="openshift-dns-operator/dns-operator-744455d44c-2lv84"
Mar 07 07:51:45 crc kubenswrapper[4761]: E0307 07:51:45.183327 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:45.683309012 +0000 UTC m=+162.592475577 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.205306 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xqmxc"]
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.207416 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g5b4l"]
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.208672 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc"]
Mar 07 07:51:45 crc kubenswrapper[4761]: W0307 07:51:45.220763 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7d70be0_84a3_4969_bbe9_283e1588343a.slice/crio-3087515dd2ad5f5dc607d1e36712eb032b041470e626d4cd3506a15c82ef0bf9 WatchSource:0}: Error finding container 3087515dd2ad5f5dc607d1e36712eb032b041470e626d4cd3506a15c82ef0bf9: Status 404 returned error can't find the container with id 3087515dd2ad5f5dc607d1e36712eb032b041470e626d4cd3506a15c82ef0bf9
Mar 07 07:51:45 crc kubenswrapper[4761]: W0307 07:51:45.227972 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded7fca9e_1d43_41a2_aef9_567b2b0a2d6f.slice/crio-bb6ae6626dd795e81aac77ed37f761416d9c9519413f017ccbc7c4679f0bbc42 WatchSource:0}: Error finding container bb6ae6626dd795e81aac77ed37f761416d9c9519413f017ccbc7c4679f0bbc42: Status 404 returned error can't find the container with id bb6ae6626dd795e81aac77ed37f761416d9c9519413f017ccbc7c4679f0bbc42
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.229019 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.244881 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.277693 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278035 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-config\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278062 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-service-ca\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") "
pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278105 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzb2h\" (UniqueName: \"kubernetes.io/projected/25717bfc-51a4-4724-bbed-70d94a322755-kube-api-access-kzb2h\") pod \"package-server-manager-789f6589d5-52lfx\" (UID: \"25717bfc-51a4-4724-bbed-70d94a322755\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278125 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/687429b1-d68f-4e6e-92f6-24da382d4bfe-config\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278145 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccks2\" (UniqueName: \"kubernetes.io/projected/9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c-kube-api-access-ccks2\") pod \"catalog-operator-68c6474976-5n9bv\" (UID: \"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278184 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctv4g\" (UniqueName: \"kubernetes.io/projected/45228992-9c3e-47bd-a54b-418c9b6183a8-kube-api-access-ctv4g\") pod \"service-ca-9c57cc56f-wj76l\" (UID: \"45228992-9c3e-47bd-a54b-418c9b6183a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278200 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-plugins-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278215 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-trusted-ca\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278231 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/071d5325-8638-4180-aefa-fb07f5533bb2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278269 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvjp4\" (UniqueName: \"kubernetes.io/projected/46c88ead-10f8-49d9-a8c5-ebf0cb031cd0-kube-api-access-gvjp4\") pod \"openshift-config-operator-7777fb866f-zjd48\" (UID: \"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278287 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k4zfw\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278334 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d16bf67b-8e20-4f35-bf5c-d7e923919679-config-volume\") pod \"dns-default-c9px5\" (UID: \"d16bf67b-8e20-4f35-bf5c-d7e923919679\") " pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278352 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-trusted-ca-bundle\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278366 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/03564f71-7198-459e-af21-7c1bdd7d7e03-default-certificate\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278381 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-bound-sa-token\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278399 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/071d5325-8638-4180-aefa-fb07f5533bb2-audit-policies\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 
07:51:45.278413 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/00f287a9-208e-4447-9572-cbe1230c61be-metrics-tls\") pod \"dns-operator-744455d44c-2lv84\" (UID: \"00f287a9-208e-4447-9572-cbe1230c61be\") " pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278431 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/46c88ead-10f8-49d9-a8c5-ebf0cb031cd0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zjd48\" (UID: \"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278457 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9543f0f5-dfe9-4443-816d-a6a8c4fbb012-config\") pod \"kube-apiserver-operator-766d6c64bb-knpfg\" (UID: \"9543f0f5-dfe9-4443-816d-a6a8c4fbb012\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278479 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-oauth-serving-cert\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278503 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69swf\" (UniqueName: \"kubernetes.io/projected/99b83f8b-bc0d-4815-b7ed-26eb344fafac-kube-api-access-69swf\") pod \"ingress-canary-lhr9n\" (UID: \"99b83f8b-bc0d-4815-b7ed-26eb344fafac\") " 
pod="openshift-ingress-canary/ingress-canary-lhr9n" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278526 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1ebfd20a-723e-45af-ac08-ed82440f1a8f-trusted-ca\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278551 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/473ecd8c-4e56-40ac-9444-2d43490c6424-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278581 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jgxc\" (UniqueName: \"kubernetes.io/projected/d704dc9c-9c1f-4f45-8438-34eda153e3b5-kube-api-access-2jgxc\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278598 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l24rt\" (UniqueName: \"kubernetes.io/projected/687429b1-d68f-4e6e-92f6-24da382d4bfe-kube-api-access-l24rt\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278614 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/9313f05e-3d9f-4a42-a2f2-0fd297a2979d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7nzqk\" (UID: \"9313f05e-3d9f-4a42-a2f2-0fd297a2979d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278632 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-oauth-config\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278650 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9543f0f5-dfe9-4443-816d-a6a8c4fbb012-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-knpfg\" (UID: \"9543f0f5-dfe9-4443-816d-a6a8c4fbb012\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278667 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92rc6\" (UniqueName: \"kubernetes.io/projected/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-kube-api-access-92rc6\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278682 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c-srv-cert\") pod \"catalog-operator-68c6474976-5n9bv\" (UID: \"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:45 crc 
kubenswrapper[4761]: I0307 07:51:45.278698 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2t2b\" (UniqueName: \"kubernetes.io/projected/66a6be2c-da25-42c0-a8fa-075b8273bb65-kube-api-access-q2t2b\") pod \"collect-profiles-29547825-sjrc6\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278729 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1ebfd20a-723e-45af-ac08-ed82440f1a8f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278752 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/71ec20b6-ead9-496e-bd0d-97702212e64d-tmpfs\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278777 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/237f8811-62cd-4c45-88e1-9a57d376d192-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jmtwv\" (UID: \"237f8811-62cd-4c45-88e1-9a57d376d192\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278806 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/071d5325-8638-4180-aefa-fb07f5533bb2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278824 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hptmw\" (UniqueName: \"kubernetes.io/projected/071d5325-8638-4180-aefa-fb07f5533bb2-kube-api-access-hptmw\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278841 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9313f05e-3d9f-4a42-a2f2-0fd297a2979d-proxy-tls\") pod \"machine-config-controller-84d6567774-7nzqk\" (UID: \"9313f05e-3d9f-4a42-a2f2-0fd297a2979d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278860 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/071d5325-8638-4180-aefa-fb07f5533bb2-encryption-config\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278901 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6v8lc\" (UID: \"86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 
07:51:45.278923 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6qnd\" (UniqueName: \"kubernetes.io/projected/03564f71-7198-459e-af21-7c1bdd7d7e03-kube-api-access-p6qnd\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278946 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66a6be2c-da25-42c0-a8fa-075b8273bb65-config-volume\") pod \"collect-profiles-29547825-sjrc6\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278965 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-mountpoint-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278986 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78v9h\" (UniqueName: \"kubernetes.io/projected/71ec20b6-ead9-496e-bd0d-97702212e64d-kube-api-access-78v9h\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279009 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/071d5325-8638-4180-aefa-fb07f5533bb2-etcd-client\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: 
\"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279032 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/687429b1-d68f-4e6e-92f6-24da382d4bfe-etcd-client\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279054 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/687429b1-d68f-4e6e-92f6-24da382d4bfe-etcd-ca\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279079 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279116 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71ec20b6-ead9-496e-bd0d-97702212e64d-webhook-cert\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279141 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/4333d454-5d55-4214-af24-c1a056088b2f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-f57jx\" (UID: \"4333d454-5d55-4214-af24-c1a056088b2f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279161 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/25717bfc-51a4-4724-bbed-70d94a322755-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-52lfx\" (UID: \"25717bfc-51a4-4724-bbed-70d94a322755\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279179 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-socket-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279202 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwcj7\" (UniqueName: \"kubernetes.io/projected/f7a57ac7-fb31-4740-a91c-79947bbdb195-kube-api-access-nwcj7\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279222 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1ebfd20a-723e-45af-ac08-ed82440f1a8f-metrics-tls\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279244 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61255be3-1f4f-4599-8372-c3397004b774-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8r9sw\" (UID: \"61255be3-1f4f-4599-8372-c3397004b774\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279277 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/03564f71-7198-459e-af21-7c1bdd7d7e03-stats-auth\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279327 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d704dc9c-9c1f-4f45-8438-34eda153e3b5-serving-cert\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279349 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/473ecd8c-4e56-40ac-9444-2d43490c6424-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279365 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dlj8\" (UniqueName: 
\"kubernetes.io/projected/55412b4c-53c7-4b21-8d7c-87879ef79ed0-kube-api-access-9dlj8\") pod \"downloads-7954f5f757-2lhb8\" (UID: \"55412b4c-53c7-4b21-8d7c-87879ef79ed0\") " pod="openshift-console/downloads-7954f5f757-2lhb8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279380 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwlvk\" (UniqueName: \"kubernetes.io/projected/4333d454-5d55-4214-af24-c1a056088b2f-kube-api-access-mwlvk\") pod \"multus-admission-controller-857f4d67dd-f57jx\" (UID: \"4333d454-5d55-4214-af24-c1a056088b2f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279395 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c-profile-collector-cert\") pod \"catalog-operator-68c6474976-5n9bv\" (UID: \"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279411 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ea66074-912c-4797-b4a5-cfd5b8927d2e-srv-cert\") pod \"olm-operator-6b444d44fb-5t2sp\" (UID: \"0ea66074-912c-4797-b4a5-cfd5b8927d2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279426 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d16bf67b-8e20-4f35-bf5c-d7e923919679-metrics-tls\") pod \"dns-default-c9px5\" (UID: \"d16bf67b-8e20-4f35-bf5c-d7e923919679\") " pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279440 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61255be3-1f4f-4599-8372-c3397004b774-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8r9sw\" (UID: \"61255be3-1f4f-4599-8372-c3397004b774\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279465 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9qrj\" (UniqueName: \"kubernetes.io/projected/934ec594-4040-486a-9df3-7841f5809127-kube-api-access-n9qrj\") pod \"migrator-59844c95c7-62wgv\" (UID: \"934ec594-4040-486a-9df3-7841f5809127\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279498 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85a8c76a-70df-46fe-af69-21b2b58c0ced-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-q4x44\" (UID: \"85a8c76a-70df-46fe-af69-21b2b58c0ced\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279518 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-tls\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279603 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-certificates\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.280443 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/687429b1-d68f-4e6e-92f6-24da382d4bfe-etcd-ca\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.281166 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/071d5325-8638-4180-aefa-fb07f5533bb2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285025 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/071d5325-8638-4180-aefa-fb07f5533bb2-audit-policies\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285547 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/687429b1-d68f-4e6e-92f6-24da382d4bfe-serving-cert\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285582 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/f7a57ac7-fb31-4740-a91c-79947bbdb195-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285604 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-images\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285622 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/687429b1-d68f-4e6e-92f6-24da382d4bfe-etcd-service-ca\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285650 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/45228992-9c3e-47bd-a54b-418c9b6183a8-signing-cabundle\") pod \"service-ca-9c57cc56f-wj76l\" (UID: \"45228992-9c3e-47bd-a54b-418c9b6183a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285677 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9kl6\" (UniqueName: \"kubernetes.io/projected/71700326-fbbb-40ef-a439-3c8feccac4a1-kube-api-access-b9kl6\") pod \"machine-config-server-82d2w\" (UID: \"71700326-fbbb-40ef-a439-3c8feccac4a1\") " pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285707 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d704dc9c-9c1f-4f45-8438-34eda153e3b5-trusted-ca\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285765 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61255be3-1f4f-4599-8372-c3397004b774-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8r9sw\" (UID: \"61255be3-1f4f-4599-8372-c3397004b774\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285794 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.286004 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/473ecd8c-4e56-40ac-9444-2d43490c6424-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.286403 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldbfv\" (UniqueName: \"kubernetes.io/projected/237f8811-62cd-4c45-88e1-9a57d376d192-kube-api-access-ldbfv\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-jmtwv\" (UID: \"237f8811-62cd-4c45-88e1-9a57d376d192\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.286480 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9543f0f5-dfe9-4443-816d-a6a8c4fbb012-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-knpfg\" (UID: \"9543f0f5-dfe9-4443-816d-a6a8c4fbb012\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.286550 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt98m\" (UniqueName: \"kubernetes.io/projected/f71cfd24-83ce-4450-8257-8d9d922d018d-kube-api-access-xt98m\") pod \"openshift-controller-manager-operator-756b6f6bc6-rqq46\" (UID: \"f71cfd24-83ce-4450-8257-8d9d922d018d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.286584 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/034dd126-5e75-4772-9464-5ccfdaa0f447-config\") pod \"service-ca-operator-777779d784-cfmg8\" (UID: \"034dd126-5e75-4772-9464-5ccfdaa0f447\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.286634 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f7a57ac7-fb31-4740-a91c-79947bbdb195-ready\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc 
kubenswrapper[4761]: I0307 07:51:45.286670 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/071d5325-8638-4180-aefa-fb07f5533bb2-audit-dir\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.287366 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/071d5325-8638-4180-aefa-fb07f5533bb2-audit-dir\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.287673 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bws6p\" (UniqueName: \"kubernetes.io/projected/69f8f788-a780-4cf1-9ef7-397428d61593-kube-api-access-bws6p\") pod \"marketplace-operator-79b997595-k4zfw\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.287707 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m82xk\" (UniqueName: \"kubernetes.io/projected/0ea66074-912c-4797-b4a5-cfd5b8927d2e-kube-api-access-m82xk\") pod \"olm-operator-6b444d44fb-5t2sp\" (UID: \"0ea66074-912c-4797-b4a5-cfd5b8927d2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.287776 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/071d5325-8638-4180-aefa-fb07f5533bb2-serving-cert\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: E0307 07:51:45.287958 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:45.787938197 +0000 UTC m=+162.697104772 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.287998 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99b83f8b-bc0d-4815-b7ed-26eb344fafac-cert\") pod \"ingress-canary-lhr9n\" (UID: \"99b83f8b-bc0d-4815-b7ed-26eb344fafac\") " pod="openshift-ingress-canary/ingress-canary-lhr9n" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.288235 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71ec20b6-ead9-496e-bd0d-97702212e64d-apiservice-cert\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.289133 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f71cfd24-83ce-4450-8257-8d9d922d018d-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-rqq46\" (UID: \"f71cfd24-83ce-4450-8257-8d9d922d018d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.289541 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/473ecd8c-4e56-40ac-9444-2d43490c6424-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.289847 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9543f0f5-dfe9-4443-816d-a6a8c4fbb012-config\") pod \"kube-apiserver-operator-766d6c64bb-knpfg\" (UID: \"9543f0f5-dfe9-4443-816d-a6a8c4fbb012\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.289864 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f71cfd24-83ce-4450-8257-8d9d922d018d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rqq46\" (UID: \"f71cfd24-83ce-4450-8257-8d9d922d018d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.289912 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 
07:51:45.289958 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a8c76a-70df-46fe-af69-21b2b58c0ced-config\") pod \"kube-controller-manager-operator-78b949d7b-q4x44\" (UID: \"85a8c76a-70df-46fe-af69-21b2b58c0ced\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.289994 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4333d454-5d55-4214-af24-c1a056088b2f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-f57jx\" (UID: \"4333d454-5d55-4214-af24-c1a056088b2f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.290326 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/071d5325-8638-4180-aefa-fb07f5533bb2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.290341 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgqd8\" (UniqueName: \"kubernetes.io/projected/00f287a9-208e-4447-9572-cbe1230c61be-kube-api-access-zgqd8\") pod \"dns-operator-744455d44c-2lv84\" (UID: \"00f287a9-208e-4447-9572-cbe1230c61be\") " pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.290451 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-registration-dir\") pod \"csi-hostpathplugin-9475l\" (UID: 
\"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291113 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291415 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d704dc9c-9c1f-4f45-8438-34eda153e3b5-config\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291478 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/71700326-fbbb-40ef-a439-3c8feccac4a1-certs\") pod \"machine-config-server-82d2w\" (UID: \"71700326-fbbb-40ef-a439-3c8feccac4a1\") " pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291541 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/071d5325-8638-4180-aefa-fb07f5533bb2-encryption-config\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291573 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-config\") pod 
\"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291580 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-service-ca\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291643 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdfpv\" (UniqueName: \"kubernetes.io/projected/7b1e7bf9-5dc9-4326-b63d-426a716351bc-kube-api-access-wdfpv\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291702 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4znv5\" (UniqueName: \"kubernetes.io/projected/86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a-kube-api-access-4znv5\") pod \"cluster-samples-operator-665b6dd947-6v8lc\" (UID: \"86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291755 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4b9n\" (UniqueName: \"kubernetes.io/projected/9313f05e-3d9f-4a42-a2f2-0fd297a2979d-kube-api-access-t4b9n\") pod \"machine-config-controller-84d6567774-7nzqk\" (UID: \"9313f05e-3d9f-4a42-a2f2-0fd297a2979d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291778 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-oauth-config\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291814 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s55r9\" (UniqueName: \"kubernetes.io/projected/1ebfd20a-723e-45af-ac08-ed82440f1a8f-kube-api-access-s55r9\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291997 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-csi-data-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292064 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99v7b\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-kube-api-access-99v7b\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292114 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03564f71-7198-459e-af21-7c1bdd7d7e03-metrics-certs\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292194 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c88ead-10f8-49d9-a8c5-ebf0cb031cd0-serving-cert\") pod \"openshift-config-operator-7777fb866f-zjd48\" (UID: \"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292229 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292248 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/45228992-9c3e-47bd-a54b-418c9b6183a8-signing-key\") pod \"service-ca-9c57cc56f-wj76l\" (UID: \"45228992-9c3e-47bd-a54b-418c9b6183a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292457 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-images\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292575 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k4zfw\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292629 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ea66074-912c-4797-b4a5-cfd5b8927d2e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5t2sp\" (UID: \"0ea66074-912c-4797-b4a5-cfd5b8927d2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292852 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a8c76a-70df-46fe-af69-21b2b58c0ced-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-q4x44\" (UID: \"85a8c76a-70df-46fe-af69-21b2b58c0ced\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292927 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66a6be2c-da25-42c0-a8fa-075b8273bb65-secret-volume\") pod \"collect-profiles-29547825-sjrc6\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293055 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkx9j\" (UniqueName: \"kubernetes.io/projected/d16bf67b-8e20-4f35-bf5c-d7e923919679-kube-api-access-nkx9j\") pod \"dns-default-c9px5\" (UID: \"d16bf67b-8e20-4f35-bf5c-d7e923919679\") " pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293064 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-trusted-ca-bundle\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293089 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckg5q\" (UniqueName: \"kubernetes.io/projected/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-kube-api-access-ckg5q\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293052 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-certificates\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293248 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03564f71-7198-459e-af21-7c1bdd7d7e03-service-ca-bundle\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293333 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f7a57ac7-fb31-4740-a91c-79947bbdb195-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 
07:51:45.293378 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/71700326-fbbb-40ef-a439-3c8feccac4a1-node-bootstrap-token\") pod \"machine-config-server-82d2w\" (UID: \"71700326-fbbb-40ef-a439-3c8feccac4a1\") " pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293405 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbzkt\" (UniqueName: \"kubernetes.io/projected/0013064e-ed56-415d-b236-1c92e98194d5-kube-api-access-qbzkt\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293450 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f71cfd24-83ce-4450-8257-8d9d922d018d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rqq46\" (UID: \"f71cfd24-83ce-4450-8257-8d9d922d018d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293574 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/034dd126-5e75-4772-9464-5ccfdaa0f447-serving-cert\") pod \"service-ca-operator-777779d784-cfmg8\" (UID: \"034dd126-5e75-4772-9464-5ccfdaa0f447\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293845 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbpwf\" (UniqueName: 
\"kubernetes.io/projected/034dd126-5e75-4772-9464-5ccfdaa0f447-kube-api-access-mbpwf\") pod \"service-ca-operator-777779d784-cfmg8\" (UID: \"034dd126-5e75-4772-9464-5ccfdaa0f447\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293943 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293968 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-proxy-tls\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.294063 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/237f8811-62cd-4c45-88e1-9a57d376d192-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jmtwv\" (UID: \"237f8811-62cd-4c45-88e1-9a57d376d192\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.294095 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-serving-cert\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " 
pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.294412 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-trusted-ca\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.295063 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/687429b1-d68f-4e6e-92f6-24da382d4bfe-serving-cert\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.295510 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.295566 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6v8lc\" (UID: \"86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" Mar 07 07:51:45 crc kubenswrapper[4761]: E0307 07:51:45.295588 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-07 07:51:45.795571884 +0000 UTC m=+162.704738359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.296096 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d704dc9c-9c1f-4f45-8438-34eda153e3b5-config\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.296595 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-tls\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.298992 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/687429b1-d68f-4e6e-92f6-24da382d4bfe-etcd-service-ca\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.299234 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.299691 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-proxy-tls\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.299760 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d704dc9c-9c1f-4f45-8438-34eda153e3b5-trusted-ca\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.300104 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f71cfd24-83ce-4450-8257-8d9d922d018d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rqq46\" (UID: \"f71cfd24-83ce-4450-8257-8d9d922d018d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.300310 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/071d5325-8638-4180-aefa-fb07f5533bb2-serving-cert\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.301329 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d704dc9c-9c1f-4f45-8438-34eda153e3b5-serving-cert\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.302095 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/46c88ead-10f8-49d9-a8c5-ebf0cb031cd0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zjd48\" (UID: \"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.302187 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/687429b1-d68f-4e6e-92f6-24da382d4bfe-config\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.302559 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9543f0f5-dfe9-4443-816d-a6a8c4fbb012-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-knpfg\" (UID: \"9543f0f5-dfe9-4443-816d-a6a8c4fbb012\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.303001 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/071d5325-8638-4180-aefa-fb07f5533bb2-etcd-client\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc 
kubenswrapper[4761]: I0307 07:51:45.305367 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/687429b1-d68f-4e6e-92f6-24da382d4bfe-etcd-client\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.308915 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c88ead-10f8-49d9-a8c5-ebf0cb031cd0-serving-cert\") pod \"openshift-config-operator-7777fb866f-zjd48\" (UID: \"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.310015 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-oauth-serving-cert\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.312340 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/00f287a9-208e-4447-9572-cbe1230c61be-metrics-tls\") pod \"dns-operator-744455d44c-2lv84\" (UID: \"00f287a9-208e-4447-9572-cbe1230c61be\") " pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.312956 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 
07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.317507 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-serving-cert\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.346695 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hptmw\" (UniqueName: \"kubernetes.io/projected/071d5325-8638-4180-aefa-fb07f5533bb2-kube-api-access-hptmw\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.367329 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l24rt\" (UniqueName: \"kubernetes.io/projected/687429b1-d68f-4e6e-92f6-24da382d4bfe-kube-api-access-l24rt\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.379049 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-bound-sa-token\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394521 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394671 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69swf\" (UniqueName: \"kubernetes.io/projected/99b83f8b-bc0d-4815-b7ed-26eb344fafac-kube-api-access-69swf\") pod \"ingress-canary-lhr9n\" (UID: \"99b83f8b-bc0d-4815-b7ed-26eb344fafac\") " pod="openshift-ingress-canary/ingress-canary-lhr9n" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394691 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1ebfd20a-723e-45af-ac08-ed82440f1a8f-trusted-ca\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394777 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9313f05e-3d9f-4a42-a2f2-0fd297a2979d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7nzqk\" (UID: \"9313f05e-3d9f-4a42-a2f2-0fd297a2979d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394804 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c-srv-cert\") pod \"catalog-operator-68c6474976-5n9bv\" (UID: \"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394822 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2t2b\" (UniqueName: \"kubernetes.io/projected/66a6be2c-da25-42c0-a8fa-075b8273bb65-kube-api-access-q2t2b\") pod \"collect-profiles-29547825-sjrc6\" (UID: 
\"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394837 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1ebfd20a-723e-45af-ac08-ed82440f1a8f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394851 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/71ec20b6-ead9-496e-bd0d-97702212e64d-tmpfs\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394871 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/237f8811-62cd-4c45-88e1-9a57d376d192-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jmtwv\" (UID: \"237f8811-62cd-4c45-88e1-9a57d376d192\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394888 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9313f05e-3d9f-4a42-a2f2-0fd297a2979d-proxy-tls\") pod \"machine-config-controller-84d6567774-7nzqk\" (UID: \"9313f05e-3d9f-4a42-a2f2-0fd297a2979d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394904 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78v9h\" 
(UniqueName: \"kubernetes.io/projected/71ec20b6-ead9-496e-bd0d-97702212e64d-kube-api-access-78v9h\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394921 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6qnd\" (UniqueName: \"kubernetes.io/projected/03564f71-7198-459e-af21-7c1bdd7d7e03-kube-api-access-p6qnd\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394936 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66a6be2c-da25-42c0-a8fa-075b8273bb65-config-volume\") pod \"collect-profiles-29547825-sjrc6\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394951 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-mountpoint-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394966 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71ec20b6-ead9-496e-bd0d-97702212e64d-webhook-cert\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394986 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/25717bfc-51a4-4724-bbed-70d94a322755-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-52lfx\" (UID: \"25717bfc-51a4-4724-bbed-70d94a322755\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395002 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-socket-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395020 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwcj7\" (UniqueName: \"kubernetes.io/projected/f7a57ac7-fb31-4740-a91c-79947bbdb195-kube-api-access-nwcj7\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395033 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1ebfd20a-723e-45af-ac08-ed82440f1a8f-metrics-tls\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395050 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61255be3-1f4f-4599-8372-c3397004b774-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8r9sw\" (UID: \"61255be3-1f4f-4599-8372-c3397004b774\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395075 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/03564f71-7198-459e-af21-7c1bdd7d7e03-stats-auth\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395102 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c-profile-collector-cert\") pod \"catalog-operator-68c6474976-5n9bv\" (UID: \"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395118 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ea66074-912c-4797-b4a5-cfd5b8927d2e-srv-cert\") pod \"olm-operator-6b444d44fb-5t2sp\" (UID: \"0ea66074-912c-4797-b4a5-cfd5b8927d2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395134 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d16bf67b-8e20-4f35-bf5c-d7e923919679-metrics-tls\") pod \"dns-default-c9px5\" (UID: \"d16bf67b-8e20-4f35-bf5c-d7e923919679\") " pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395150 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61255be3-1f4f-4599-8372-c3397004b774-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-8r9sw\" (UID: \"61255be3-1f4f-4599-8372-c3397004b774\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395167 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9qrj\" (UniqueName: \"kubernetes.io/projected/934ec594-4040-486a-9df3-7841f5809127-kube-api-access-n9qrj\") pod \"migrator-59844c95c7-62wgv\" (UID: \"934ec594-4040-486a-9df3-7841f5809127\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395183 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85a8c76a-70df-46fe-af69-21b2b58c0ced-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-q4x44\" (UID: \"85a8c76a-70df-46fe-af69-21b2b58c0ced\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395199 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f7a57ac7-fb31-4740-a91c-79947bbdb195-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395217 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/45228992-9c3e-47bd-a54b-418c9b6183a8-signing-cabundle\") pod \"service-ca-9c57cc56f-wj76l\" (UID: \"45228992-9c3e-47bd-a54b-418c9b6183a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395233 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-b9kl6\" (UniqueName: \"kubernetes.io/projected/71700326-fbbb-40ef-a439-3c8feccac4a1-kube-api-access-b9kl6\") pod \"machine-config-server-82d2w\" (UID: \"71700326-fbbb-40ef-a439-3c8feccac4a1\") " pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395247 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61255be3-1f4f-4599-8372-c3397004b774-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8r9sw\" (UID: \"61255be3-1f4f-4599-8372-c3397004b774\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395263 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldbfv\" (UniqueName: \"kubernetes.io/projected/237f8811-62cd-4c45-88e1-9a57d376d192-kube-api-access-ldbfv\") pod \"kube-storage-version-migrator-operator-b67b599dd-jmtwv\" (UID: \"237f8811-62cd-4c45-88e1-9a57d376d192\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395290 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/034dd126-5e75-4772-9464-5ccfdaa0f447-config\") pod \"service-ca-operator-777779d784-cfmg8\" (UID: \"034dd126-5e75-4772-9464-5ccfdaa0f447\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395304 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f7a57ac7-fb31-4740-a91c-79947bbdb195-ready\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") 
" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395318 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m82xk\" (UniqueName: \"kubernetes.io/projected/0ea66074-912c-4797-b4a5-cfd5b8927d2e-kube-api-access-m82xk\") pod \"olm-operator-6b444d44fb-5t2sp\" (UID: \"0ea66074-912c-4797-b4a5-cfd5b8927d2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395335 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bws6p\" (UniqueName: \"kubernetes.io/projected/69f8f788-a780-4cf1-9ef7-397428d61593-kube-api-access-bws6p\") pod \"marketplace-operator-79b997595-k4zfw\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395358 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99b83f8b-bc0d-4815-b7ed-26eb344fafac-cert\") pod \"ingress-canary-lhr9n\" (UID: \"99b83f8b-bc0d-4815-b7ed-26eb344fafac\") " pod="openshift-ingress-canary/ingress-canary-lhr9n" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395373 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71ec20b6-ead9-496e-bd0d-97702212e64d-apiservice-cert\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395396 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a8c76a-70df-46fe-af69-21b2b58c0ced-config\") pod \"kube-controller-manager-operator-78b949d7b-q4x44\" 
(UID: \"85a8c76a-70df-46fe-af69-21b2b58c0ced\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395421 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-registration-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395439 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/71700326-fbbb-40ef-a439-3c8feccac4a1-certs\") pod \"machine-config-server-82d2w\" (UID: \"71700326-fbbb-40ef-a439-3c8feccac4a1\") " pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395453 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-csi-data-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395477 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4b9n\" (UniqueName: \"kubernetes.io/projected/9313f05e-3d9f-4a42-a2f2-0fd297a2979d-kube-api-access-t4b9n\") pod \"machine-config-controller-84d6567774-7nzqk\" (UID: \"9313f05e-3d9f-4a42-a2f2-0fd297a2979d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395493 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s55r9\" (UniqueName: 
\"kubernetes.io/projected/1ebfd20a-723e-45af-ac08-ed82440f1a8f-kube-api-access-s55r9\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395512 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03564f71-7198-459e-af21-7c1bdd7d7e03-metrics-certs\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395527 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/45228992-9c3e-47bd-a54b-418c9b6183a8-signing-key\") pod \"service-ca-9c57cc56f-wj76l\" (UID: \"45228992-9c3e-47bd-a54b-418c9b6183a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395543 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k4zfw\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395557 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ea66074-912c-4797-b4a5-cfd5b8927d2e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5t2sp\" (UID: \"0ea66074-912c-4797-b4a5-cfd5b8927d2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395573 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a8c76a-70df-46fe-af69-21b2b58c0ced-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-q4x44\" (UID: \"85a8c76a-70df-46fe-af69-21b2b58c0ced\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395587 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66a6be2c-da25-42c0-a8fa-075b8273bb65-secret-volume\") pod \"collect-profiles-29547825-sjrc6\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395601 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkx9j\" (UniqueName: \"kubernetes.io/projected/d16bf67b-8e20-4f35-bf5c-d7e923919679-kube-api-access-nkx9j\") pod \"dns-default-c9px5\" (UID: \"d16bf67b-8e20-4f35-bf5c-d7e923919679\") " pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395620 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03564f71-7198-459e-af21-7c1bdd7d7e03-service-ca-bundle\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395636 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f7a57ac7-fb31-4740-a91c-79947bbdb195-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 
crc kubenswrapper[4761]: I0307 07:51:45.395651 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/71700326-fbbb-40ef-a439-3c8feccac4a1-node-bootstrap-token\") pod \"machine-config-server-82d2w\" (UID: \"71700326-fbbb-40ef-a439-3c8feccac4a1\") " pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395667 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbzkt\" (UniqueName: \"kubernetes.io/projected/0013064e-ed56-415d-b236-1c92e98194d5-kube-api-access-qbzkt\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395686 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/034dd126-5e75-4772-9464-5ccfdaa0f447-serving-cert\") pod \"service-ca-operator-777779d784-cfmg8\" (UID: \"034dd126-5e75-4772-9464-5ccfdaa0f447\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395752 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbpwf\" (UniqueName: \"kubernetes.io/projected/034dd126-5e75-4772-9464-5ccfdaa0f447-kube-api-access-mbpwf\") pod \"service-ca-operator-777779d784-cfmg8\" (UID: \"034dd126-5e75-4772-9464-5ccfdaa0f447\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395776 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/237f8811-62cd-4c45-88e1-9a57d376d192-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jmtwv\" (UID: 
\"237f8811-62cd-4c45-88e1-9a57d376d192\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:45 crc kubenswrapper[4761]: E0307 07:51:45.395799 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:45.895782292 +0000 UTC m=+162.804948767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395818 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzb2h\" (UniqueName: \"kubernetes.io/projected/25717bfc-51a4-4724-bbed-70d94a322755-kube-api-access-kzb2h\") pod \"package-server-manager-789f6589d5-52lfx\" (UID: \"25717bfc-51a4-4724-bbed-70d94a322755\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395847 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccks2\" (UniqueName: \"kubernetes.io/projected/9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c-kube-api-access-ccks2\") pod \"catalog-operator-68c6474976-5n9bv\" (UID: \"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395885 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ctv4g\" (UniqueName: \"kubernetes.io/projected/45228992-9c3e-47bd-a54b-418c9b6183a8-kube-api-access-ctv4g\") pod \"service-ca-9c57cc56f-wj76l\" (UID: \"45228992-9c3e-47bd-a54b-418c9b6183a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395900 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-plugins-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395923 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k4zfw\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395960 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d16bf67b-8e20-4f35-bf5c-d7e923919679-config-volume\") pod \"dns-default-c9px5\" (UID: \"d16bf67b-8e20-4f35-bf5c-d7e923919679\") " pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395980 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/03564f71-7198-459e-af21-7c1bdd7d7e03-default-certificate\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.396023 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/71ec20b6-ead9-496e-bd0d-97702212e64d-tmpfs\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.396365 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/237f8811-62cd-4c45-88e1-9a57d376d192-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jmtwv\" (UID: \"237f8811-62cd-4c45-88e1-9a57d376d192\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.396778 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1ebfd20a-723e-45af-ac08-ed82440f1a8f-trusted-ca\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.397056 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9313f05e-3d9f-4a42-a2f2-0fd297a2979d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7nzqk\" (UID: \"9313f05e-3d9f-4a42-a2f2-0fd297a2979d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.397576 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/034dd126-5e75-4772-9464-5ccfdaa0f447-config\") pod \"service-ca-operator-777779d784-cfmg8\" (UID: \"034dd126-5e75-4772-9464-5ccfdaa0f447\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.398490 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61255be3-1f4f-4599-8372-c3397004b774-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8r9sw\" (UID: \"61255be3-1f4f-4599-8372-c3397004b774\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.400178 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66a6be2c-da25-42c0-a8fa-075b8273bb65-config-volume\") pod \"collect-profiles-29547825-sjrc6\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.400240 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-mountpoint-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.400271 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9313f05e-3d9f-4a42-a2f2-0fd297a2979d-proxy-tls\") pod \"machine-config-controller-84d6567774-7nzqk\" (UID: \"9313f05e-3d9f-4a42-a2f2-0fd297a2979d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.401244 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-socket-dir\") pod \"csi-hostpathplugin-9475l\" 
(UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.401277 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f7a57ac7-fb31-4740-a91c-79947bbdb195-ready\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.401284 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d16bf67b-8e20-4f35-bf5c-d7e923919679-metrics-tls\") pod \"dns-default-c9px5\" (UID: \"d16bf67b-8e20-4f35-bf5c-d7e923919679\") " pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.401427 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f7a57ac7-fb31-4740-a91c-79947bbdb195-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.401525 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jgxc\" (UniqueName: \"kubernetes.io/projected/d704dc9c-9c1f-4f45-8438-34eda153e3b5-kube-api-access-2jgxc\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.401605 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-csi-data-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " 
pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.401649 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-registration-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.402533 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a8c76a-70df-46fe-af69-21b2b58c0ced-config\") pod \"kube-controller-manager-operator-78b949d7b-q4x44\" (UID: \"85a8c76a-70df-46fe-af69-21b2b58c0ced\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.402927 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k4zfw\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.403129 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/25717bfc-51a4-4724-bbed-70d94a322755-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-52lfx\" (UID: \"25717bfc-51a4-4724-bbed-70d94a322755\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.404121 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c-profile-collector-cert\") pod \"catalog-operator-68c6474976-5n9bv\" (UID: \"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.404408 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-plugins-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.404638 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/45228992-9c3e-47bd-a54b-418c9b6183a8-signing-cabundle\") pod \"service-ca-9c57cc56f-wj76l\" (UID: \"45228992-9c3e-47bd-a54b-418c9b6183a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.405371 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d16bf67b-8e20-4f35-bf5c-d7e923919679-config-volume\") pod \"dns-default-c9px5\" (UID: \"d16bf67b-8e20-4f35-bf5c-d7e923919679\") " pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.405850 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ea66074-912c-4797-b4a5-cfd5b8927d2e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5t2sp\" (UID: \"0ea66074-912c-4797-b4a5-cfd5b8927d2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.406427 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/f7a57ac7-fb31-4740-a91c-79947bbdb195-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.407508 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/03564f71-7198-459e-af21-7c1bdd7d7e03-stats-auth\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.407708 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61255be3-1f4f-4599-8372-c3397004b774-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8r9sw\" (UID: \"61255be3-1f4f-4599-8372-c3397004b774\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.407886 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71ec20b6-ead9-496e-bd0d-97702212e64d-apiservice-cert\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.407966 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71ec20b6-ead9-496e-bd0d-97702212e64d-webhook-cert\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.408535 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/99b83f8b-bc0d-4815-b7ed-26eb344fafac-cert\") pod \"ingress-canary-lhr9n\" (UID: \"99b83f8b-bc0d-4815-b7ed-26eb344fafac\") " pod="openshift-ingress-canary/ingress-canary-lhr9n" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.409023 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/034dd126-5e75-4772-9464-5ccfdaa0f447-serving-cert\") pod \"service-ca-operator-777779d784-cfmg8\" (UID: \"034dd126-5e75-4772-9464-5ccfdaa0f447\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.409158 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k4zfw\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.409378 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c-srv-cert\") pod \"catalog-operator-68c6474976-5n9bv\" (UID: \"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.409908 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ea66074-912c-4797-b4a5-cfd5b8927d2e-srv-cert\") pod \"olm-operator-6b444d44fb-5t2sp\" (UID: \"0ea66074-912c-4797-b4a5-cfd5b8927d2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.410108 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/45228992-9c3e-47bd-a54b-418c9b6183a8-signing-key\") pod \"service-ca-9c57cc56f-wj76l\" (UID: \"45228992-9c3e-47bd-a54b-418c9b6183a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.410317 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1ebfd20a-723e-45af-ac08-ed82440f1a8f-metrics-tls\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.411221 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/03564f71-7198-459e-af21-7c1bdd7d7e03-default-certificate\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.411286 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/237f8811-62cd-4c45-88e1-9a57d376d192-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jmtwv\" (UID: \"237f8811-62cd-4c45-88e1-9a57d376d192\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.411730 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03564f71-7198-459e-af21-7c1bdd7d7e03-metrics-certs\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.412525 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92rc6\" (UniqueName: \"kubernetes.io/projected/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-kube-api-access-92rc6\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.412689 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a8c76a-70df-46fe-af69-21b2b58c0ced-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-q4x44\" (UID: \"85a8c76a-70df-46fe-af69-21b2b58c0ced\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.413690 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03564f71-7198-459e-af21-7c1bdd7d7e03-service-ca-bundle\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.413842 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/71700326-fbbb-40ef-a439-3c8feccac4a1-certs\") pod \"machine-config-server-82d2w\" (UID: \"71700326-fbbb-40ef-a439-3c8feccac4a1\") " pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.414024 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/71700326-fbbb-40ef-a439-3c8feccac4a1-node-bootstrap-token\") pod \"machine-config-server-82d2w\" (UID: \"71700326-fbbb-40ef-a439-3c8feccac4a1\") " 
pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.415339 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66a6be2c-da25-42c0-a8fa-075b8273bb65-secret-volume\") pod \"collect-profiles-29547825-sjrc6\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.426049 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.430643 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dlj8\" (UniqueName: \"kubernetes.io/projected/55412b4c-53c7-4b21-8d7c-87879ef79ed0-kube-api-access-9dlj8\") pod \"downloads-7954f5f757-2lhb8\" (UID: \"55412b4c-53c7-4b21-8d7c-87879ef79ed0\") " pod="openshift-console/downloads-7954f5f757-2lhb8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.445664 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.456152 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwlvk\" (UniqueName: \"kubernetes.io/projected/4333d454-5d55-4214-af24-c1a056088b2f-kube-api-access-mwlvk\") pod \"multus-admission-controller-857f4d67dd-f57jx\" (UID: \"4333d454-5d55-4214-af24-c1a056088b2f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.477153 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9543f0f5-dfe9-4443-816d-a6a8c4fbb012-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-knpfg\" (UID: \"9543f0f5-dfe9-4443-816d-a6a8c4fbb012\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.481169 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9vsj5"] Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.484672 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.492944 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvjp4\" (UniqueName: \"kubernetes.io/projected/46c88ead-10f8-49d9-a8c5-ebf0cb031cd0-kube-api-access-gvjp4\") pod \"openshift-config-operator-7777fb866f-zjd48\" (UID: \"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.497559 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: E0307 07:51:45.497894 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:45.997883825 +0000 UTC m=+162.907050300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.498015 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.504876 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2lhb8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.508519 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf"] Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.513081 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt98m\" (UniqueName: \"kubernetes.io/projected/f71cfd24-83ce-4450-8257-8d9d922d018d-kube-api-access-xt98m\") pod \"openshift-controller-manager-operator-756b6f6bc6-rqq46\" (UID: \"f71cfd24-83ce-4450-8257-8d9d922d018d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" Mar 07 07:51:45 crc kubenswrapper[4761]: W0307 07:51:45.518666 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0868ef7f_3f74_41e3_bc81_8cf20dc88c43.slice/crio-9f0aea4666bef403ee0c79d291bb8d9215e17783ac81b07338fb6e17d158507c WatchSource:0}: Error finding container 9f0aea4666bef403ee0c79d291bb8d9215e17783ac81b07338fb6e17d158507c: Status 404 returned error can't find the container with id 9f0aea4666bef403ee0c79d291bb8d9215e17783ac81b07338fb6e17d158507c Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.533680 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgqd8\" (UniqueName: \"kubernetes.io/projected/00f287a9-208e-4447-9572-cbe1230c61be-kube-api-access-zgqd8\") pod \"dns-operator-744455d44c-2lv84\" (UID: \"00f287a9-208e-4447-9572-cbe1230c61be\") " pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.548356 4761 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.568262 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.572397 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4znv5\" (UniqueName: \"kubernetes.io/projected/86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a-kube-api-access-4znv5\") pod \"cluster-samples-operator-665b6dd947-6v8lc\" (UID: \"86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.572687 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.590140 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdfpv\" (UniqueName: \"kubernetes.io/projected/7b1e7bf9-5dc9-4326-b63d-426a716351bc-kube-api-access-wdfpv\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.598509 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:45 crc kubenswrapper[4761]: E0307 07:51:45.599121 4761 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:46.099097158 +0000 UTC m=+163.008263633 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.612764 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" event={"ID":"21e2c5a2-e968-4844-8843-23870b388e6d","Type":"ContainerStarted","Data":"407375ec00dc04252023445b62731194fbfb32d50af19f9b516072fe3a71402b"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.612808 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" event={"ID":"21e2c5a2-e968-4844-8843-23870b388e6d","Type":"ContainerStarted","Data":"85aa246b580d1c61a9b7d0d898416a8e0cd2e35170d5426b941bc8973d1755da"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.613236 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.616026 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99v7b\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-kube-api-access-99v7b\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.624243 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" event={"ID":"828a167b-cf1b-433c-844a-7ca236afd4b9","Type":"ContainerStarted","Data":"aee63f6172ca58137022a63b00d7159d2b5bf6ceee7513dc19f18e01b5d2c5aa"}
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.624511 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" event={"ID":"828a167b-cf1b-433c-844a-7ca236afd4b9","Type":"ContainerStarted","Data":"fcabf6ad23134226a132468bdcafa476bfdd6b463a8ba6ac9b8637be510e46d2"}
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.624526 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" event={"ID":"828a167b-cf1b-433c-844a-7ca236afd4b9","Type":"ContainerStarted","Data":"e4666082b2f5ad7ccfc421b84a8f8496eb656d88fbc48a1dd73608cdeb50161c"}
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.624455 4761 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5d2nn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body=
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.624572 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" podUID="21e2c5a2-e968-4844-8843-23870b388e6d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.629621 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckg5q\" (UniqueName: \"kubernetes.io/projected/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-kube-api-access-ckg5q\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.637896 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" event={"ID":"4d1f4462-4337-4610-9c4b-98bc1f3974e8","Type":"ContainerStarted","Data":"6f62325ad17e8853a6e7585461ffc43cc40ff21577739c486aad4b95c9d6b28a"}
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.637938 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" event={"ID":"4d1f4462-4337-4610-9c4b-98bc1f3974e8","Type":"ContainerStarted","Data":"163791372de3ee385a85f18632a59d616b385c0c51d983c9ea8a45a28ac55aac"}
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.642385 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" event={"ID":"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873","Type":"ContainerStarted","Data":"461fb8a37b8666f2e5aefdf9816e32c6adfbb7896af07246943c80dbb3508b66"}
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.642450 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" event={"ID":"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873","Type":"ContainerStarted","Data":"deacd60a652a993c20ace8797b8ee26b211a0574ffad049e6dc69fc8d870cd0c"}
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.645747 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" event={"ID":"0868ef7f-3f74-41e3-bc81-8cf20dc88c43","Type":"ContainerStarted","Data":"9f0aea4666bef403ee0c79d291bb8d9215e17783ac81b07338fb6e17d158507c"}
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.649309 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" event={"ID":"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf","Type":"ContainerStarted","Data":"90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29"}
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.649342 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" event={"ID":"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf","Type":"ContainerStarted","Data":"882cc6d1d8ff9baed06c9225585998a42ca5bcddd90b232ac913143f0ea4ff01"}
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.649861 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.650833 4761 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-l9gzh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.650876 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" podUID="6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.652299 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" event={"ID":"9b718980-7c2c-4b0f-b605-331928c5a58e","Type":"ContainerStarted","Data":"c88027eb29b65aaa4b7b5cdbab7445a53ba6681b6eb098a92db1fe80fca6c823"}
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.661832 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" event={"ID":"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f","Type":"ContainerStarted","Data":"db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7"}
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.661873 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" event={"ID":"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f","Type":"ContainerStarted","Data":"bb6ae6626dd795e81aac77ed37f761416d9c9519413f017ccbc7c4679f0bbc42"}
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.662139 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.665076 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1ebfd20a-723e-45af-ac08-ed82440f1a8f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.666078 4761 generic.go:334] "Generic (PLEG): container finished" podID="f7d70be0-84a3-4969-bbe9-283e1588343a" containerID="87141c4cded6a03f6d3953c611d2bfc09ec72bf645d2067a4af78a1f7173b761" exitCode=0
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.666251 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" event={"ID":"f7d70be0-84a3-4969-bbe9-283e1588343a","Type":"ContainerDied","Data":"87141c4cded6a03f6d3953c611d2bfc09ec72bf645d2067a4af78a1f7173b761"}
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.666320 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" event={"ID":"f7d70be0-84a3-4969-bbe9-283e1588343a","Type":"ContainerStarted","Data":"3087515dd2ad5f5dc607d1e36712eb032b041470e626d4cd3506a15c82ef0bf9"}
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.674130 4761 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7r7nc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.674162 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" podUID="ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.674796 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6qsbw"]
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.687148 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2t2b\" (UniqueName: \"kubernetes.io/projected/66a6be2c-da25-42c0-a8fa-075b8273bb65-kube-api-access-q2t2b\") pod \"collect-profiles-29547825-sjrc6\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.693020 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw"]
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.696107 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69swf\" (UniqueName: \"kubernetes.io/projected/99b83f8b-bc0d-4815-b7ed-26eb344fafac-kube-api-access-69swf\") pod \"ingress-canary-lhr9n\" (UID: \"99b83f8b-bc0d-4815-b7ed-26eb344fafac\") " pod="openshift-ingress-canary/ingress-canary-lhr9n"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.703571 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:45 crc kubenswrapper[4761]: E0307 07:51:45.705133 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:46.205120757 +0000 UTC m=+163.114287232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.712422 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lhr9n"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.717384 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9kl6\" (UniqueName: \"kubernetes.io/projected/71700326-fbbb-40ef-a439-3c8feccac4a1-kube-api-access-b9kl6\") pod \"machine-config-server-82d2w\" (UID: \"71700326-fbbb-40ef-a439-3c8feccac4a1\") " pod="openshift-machine-config-operator/machine-config-server-82d2w"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.729348 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldbfv\" (UniqueName: \"kubernetes.io/projected/237f8811-62cd-4c45-88e1-9a57d376d192-kube-api-access-ldbfv\") pod \"kube-storage-version-migrator-operator-b67b599dd-jmtwv\" (UID: \"237f8811-62cd-4c45-88e1-9a57d376d192\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.735084 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc"
Mar 07 07:51:45 crc kubenswrapper[4761]: W0307 07:51:45.745462 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd704dc9c_9c1f_4f45_8438_34eda153e3b5.slice/crio-1f86d7eebeb2a363d27b104e6f5603d75b440e59e5ec565c1016d67a4f59c7a1 WatchSource:0}: Error finding container 1f86d7eebeb2a363d27b104e6f5603d75b440e59e5ec565c1016d67a4f59c7a1: Status 404 returned error can't find the container with id 1f86d7eebeb2a363d27b104e6f5603d75b440e59e5ec565c1016d67a4f59c7a1
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.753163 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78v9h\" (UniqueName: \"kubernetes.io/projected/71ec20b6-ead9-496e-bd0d-97702212e64d-kube-api-access-78v9h\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.773744 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6qnd\" (UniqueName: \"kubernetes.io/projected/03564f71-7198-459e-af21-7c1bdd7d7e03-kube-api-access-p6qnd\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.792021 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.793067 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85a8c76a-70df-46fe-af69-21b2b58c0ced-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-q4x44\" (UID: \"85a8c76a-70df-46fe-af69-21b2b58c0ced\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.805749 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:45 crc kubenswrapper[4761]: E0307 07:51:45.806957 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:46.306918834 +0000 UTC m=+163.216085319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.814846 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fsrlc"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.815350 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61255be3-1f4f-4599-8372-c3397004b774-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8r9sw\" (UID: \"61255be3-1f4f-4599-8372-c3397004b774\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.818157 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2lv84"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.823770 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.837398 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9qrj\" (UniqueName: \"kubernetes.io/projected/934ec594-4040-486a-9df3-7841f5809127-kube-api-access-n9qrj\") pod \"migrator-59844c95c7-62wgv\" (UID: \"934ec594-4040-486a-9df3-7841f5809127\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.856893 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwcj7\" (UniqueName: \"kubernetes.io/projected/f7a57ac7-fb31-4740-a91c-79947bbdb195-kube-api-access-nwcj7\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.873385 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkx9j\" (UniqueName: \"kubernetes.io/projected/d16bf67b-8e20-4f35-bf5c-d7e923919679-kube-api-access-nkx9j\") pod \"dns-default-c9px5\" (UID: \"d16bf67b-8e20-4f35-bf5c-d7e923919679\") " pod="openshift-dns/dns-default-c9px5"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.885421 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-8vzkp"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.890642 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bws6p\" (UniqueName: \"kubernetes.io/projected/69f8f788-a780-4cf1-9ef7-397428d61593-kube-api-access-bws6p\") pod \"marketplace-operator-79b997595-k4zfw\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.907834 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:45 crc kubenswrapper[4761]: E0307 07:51:45.908171 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:46.408158247 +0000 UTC m=+163.317324722 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.910276 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m82xk\" (UniqueName: \"kubernetes.io/projected/0ea66074-912c-4797-b4a5-cfd5b8927d2e-kube-api-access-m82xk\") pod \"olm-operator-6b444d44fb-5t2sp\" (UID: \"0ea66074-912c-4797-b4a5-cfd5b8927d2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp"
Mar 07 07:51:45 crc kubenswrapper[4761]: W0307 07:51:45.910688 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03564f71_7198_459e_af21_7c1bdd7d7e03.slice/crio-9fdbcbcb69bfc357988081ee6f93b373c9a2f52d0d99e03c96ce02610322aea9 WatchSource:0}: Error finding container 9fdbcbcb69bfc357988081ee6f93b373c9a2f52d0d99e03c96ce02610322aea9: Status 404 returned error can't find the container with id 9fdbcbcb69bfc357988081ee6f93b373c9a2f52d0d99e03c96ce02610322aea9
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.912650 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.918431 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.923906 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.939060 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4b9n\" (UniqueName: \"kubernetes.io/projected/9313f05e-3d9f-4a42-a2f2-0fd297a2979d-kube-api-access-t4b9n\") pod \"machine-config-controller-84d6567774-7nzqk\" (UID: \"9313f05e-3d9f-4a42-a2f2-0fd297a2979d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.952104 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.952137 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.959672 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.960399 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzb2h\" (UniqueName: \"kubernetes.io/projected/25717bfc-51a4-4724-bbed-70d94a322755-kube-api-access-kzb2h\") pod \"package-server-manager-789f6589d5-52lfx\" (UID: \"25717bfc-51a4-4724-bbed-70d94a322755\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.963988 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.972153 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.973247 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccks2\" (UniqueName: \"kubernetes.io/projected/9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c-kube-api-access-ccks2\") pod \"catalog-operator-68c6474976-5n9bv\" (UID: \"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv"
Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.981031 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-82d2w"
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.005198 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz"
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.005754 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctv4g\" (UniqueName: \"kubernetes.io/projected/45228992-9c3e-47bd-a54b-418c9b6183a8-kube-api-access-ctv4g\") pod \"service-ca-9c57cc56f-wj76l\" (UID: \"45228992-9c3e-47bd-a54b-418c9b6183a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-wj76l"
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.011218 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.011567 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:46.511551041 +0000 UTC m=+163.420717516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.013727 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbzkt\" (UniqueName: \"kubernetes.io/projected/0013064e-ed56-415d-b236-1c92e98194d5-kube-api-access-qbzkt\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l"
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.018705 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-c9px5"
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.026486 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zjd48"]
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.037696 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbpwf\" (UniqueName: \"kubernetes.io/projected/034dd126-5e75-4772-9464-5ccfdaa0f447-kube-api-access-mbpwf\") pod \"service-ca-operator-777779d784-cfmg8\" (UID: \"034dd126-5e75-4772-9464-5ccfdaa0f447\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8"
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.061505 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s55r9\" (UniqueName: \"kubernetes.io/projected/1ebfd20a-723e-45af-ac08-ed82440f1a8f-kube-api-access-s55r9\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998"
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.113565 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.114046 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:46.614034334 +0000 UTC m=+163.523200809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.115132 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v"]
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.120282 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-n8d4g"]
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.179511 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk"
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.193029 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp"
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.199386 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv"
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.204770 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998"
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.214531 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.214818 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:46.714801906 +0000 UTC m=+163.623968381 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.232261 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wj76l"
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.237960 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8"
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.240584 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg"]
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.265851 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2lhb8"]
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.266367 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-f57jx"]
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.294762 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9475l"
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.316561 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.317167 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:46.817151966 +0000 UTC m=+163.726318441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.381479 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc"]
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.396942 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lhr9n"]
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.417477 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.417929 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:46.917911697 +0000 UTC m=+163.827078172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.523618 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.524528 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.024504921 +0000 UTC m=+163.933671406 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.629599 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.630564 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.130547181 +0000 UTC m=+164.039713656 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.659971 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" podStartSLOduration=86.659955609 podStartE2EDuration="1m26.659955609s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:46.658836982 +0000 UTC m=+163.568003457" watchObservedRunningTime="2026-03-07 07:51:46.659955609 +0000 UTC m=+163.569122084"
Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.732926 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.733213 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.233202518 +0000 UTC m=+164.142368993 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.758826 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" event={"ID":"4d1f4462-4337-4610-9c4b-98bc1f3974e8","Type":"ContainerStarted","Data":"11873eb4871d6588639892cc74358c925e3767791eb01fbdfcd43df8662884fb"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.782851 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" event={"ID":"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4","Type":"ContainerStarted","Data":"881a0328a28bd2b63f970775c3573cf3cd8f872d34a76448c74566cd690adb76"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.792012 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-82d2w" event={"ID":"71700326-fbbb-40ef-a439-3c8feccac4a1","Type":"ContainerStarted","Data":"32dfe53e3aaa2ba3010a54f61ff0ed6b89b4f4117261b1b3f49b109d5d55d56b"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.801274 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lhr9n" event={"ID":"99b83f8b-bc0d-4815-b7ed-26eb344fafac","Type":"ContainerStarted","Data":"29c7726a0c84d6ed97efefd6ba62c4bc4beca5e1f50553c4f00b1a80b776d7ed"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.803768 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fsrlc"] Mar 07 07:51:46 crc 
kubenswrapper[4761]: I0307 07:51:46.835235 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.836050 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.33603532 +0000 UTC m=+164.245201795 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.858851 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.882888 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.911060 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k4zfw"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.912161 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" event={"ID":"0868ef7f-3f74-41e3-bc81-8cf20dc88c43","Type":"ContainerStarted","Data":"3aae9d1949f29f8a7ae6aa2ba7150cd8e12626138a303387879fde766ec3acca"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.919523 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2lv84"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.924204 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" event={"ID":"4333d454-5d55-4214-af24-c1a056088b2f","Type":"ContainerStarted","Data":"488e550b606dd8bcfdf3d5297d67330495a6480028d9806cfb62a2c2ba58bfc6"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.937206 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.937466 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.437456237 +0000 UTC m=+164.346622712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.937924 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.941793 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" event={"ID":"9543f0f5-dfe9-4443-816d-a6a8c4fbb012","Type":"ContainerStarted","Data":"5e4a276b07e6f1ba374d21bda1508d0eca3bbac93dec885c236c4e0ac917d175"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.945974 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" event={"ID":"f7a57ac7-fb31-4740-a91c-79947bbdb195","Type":"ContainerStarted","Data":"10fffd5195b9e393f3834032440a56b1e21df8b250bef07886ea2a129d17fa61"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.947373 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" event={"ID":"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0","Type":"ContainerStarted","Data":"2ddde3d63a6381bff1b929e20dc87567f3408e832e93906dd3455a928bcc78f6"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.959063 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.970755 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" event={"ID":"071d5325-8638-4180-aefa-fb07f5533bb2","Type":"ContainerStarted","Data":"4a92ce3a9acd5388642d2bd5c49467e1ce4087acc0c36cc692bae669f7da3618"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.975328 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2lhb8" event={"ID":"55412b4c-53c7-4b21-8d7c-87879ef79ed0","Type":"ContainerStarted","Data":"98376353f074ccaf7203b8ad4c112af6cb414597c01b20941b3f78ab5f8ea9a8"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.983305 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" event={"ID":"9b718980-7c2c-4b0f-b605-331928c5a58e","Type":"ContainerStarted","Data":"22b55522449e506862c2584c1669afb72889d2b2a3e9a82b85c201289635386c"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.998079 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" event={"ID":"f7d70be0-84a3-4969-bbe9-283e1588343a","Type":"ContainerStarted","Data":"9bc0fa671a229278aaa23b3db1873096da5e7fab876ce858696dd3f8753f457a"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.998552 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" podStartSLOduration=86.998538619 podStartE2EDuration="1m26.998538619s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:46.994751097 +0000 UTC m=+163.903917582" watchObservedRunningTime="2026-03-07 07:51:46.998538619 +0000 UTC m=+163.907705094" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.007706 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" 
event={"ID":"d704dc9c-9c1f-4f45-8438-34eda153e3b5","Type":"ContainerStarted","Data":"d2ae8f588889d280d8dd782663b71192c29cd20c81435bd1b5054bad8dc9b285"} Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.007806 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" event={"ID":"d704dc9c-9c1f-4f45-8438-34eda153e3b5","Type":"ContainerStarted","Data":"1f86d7eebeb2a363d27b104e6f5603d75b440e59e5ec565c1016d67a4f59c7a1"} Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.008439 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.009690 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.009747 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.037579 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.038299 4761 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.53827476 +0000 UTC m=+164.447441265 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.045814 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" event={"ID":"687429b1-d68f-4e6e-92f6-24da382d4bfe","Type":"ContainerStarted","Data":"e3a442c271053bf6d43fe0603e2968cc23bd10144c68bf64775413ede5d7475d"} Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.052684 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-8vzkp" event={"ID":"03564f71-7198-459e-af21-7c1bdd7d7e03","Type":"ContainerStarted","Data":"077141596c963bf9bec89b1e3ff3e264d61633dd3ab021be177dcdc0e9b543c9"} Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.052744 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-8vzkp" event={"ID":"03564f71-7198-459e-af21-7c1bdd7d7e03","Type":"ContainerStarted","Data":"9fdbcbcb69bfc357988081ee6f93b373c9a2f52d0d99e03c96ce02610322aea9"} Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.063148 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.073939 4761 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.143790 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.153377 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.653345381 +0000 UTC m=+164.562511856 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.246491 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.247971 4761 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.747949181 +0000 UTC m=+164.657115656 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.329374 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" podStartSLOduration=87.32935789 podStartE2EDuration="1m27.32935789s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:47.328783796 +0000 UTC m=+164.237950291" watchObservedRunningTime="2026-03-07 07:51:47.32935789 +0000 UTC m=+164.238524365" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.348177 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.348495 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.848483817 +0000 UTC m=+164.757650292 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.366584 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" podStartSLOduration=87.366565499 podStartE2EDuration="1m27.366565499s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:47.365599985 +0000 UTC m=+164.274766460" watchObservedRunningTime="2026-03-07 07:51:47.366565499 +0000 UTC m=+164.275731974" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.459082 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.459652 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.959635452 +0000 UTC m=+164.868801927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.461814 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw"] Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.473978 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt"] Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.475807 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv"] Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.485246 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.489348 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp"] Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.561325 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.561675 4761 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:48.061664414 +0000 UTC m=+164.970830889 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:47 crc kubenswrapper[4761]: W0307 07:51:47.655920 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71ec20b6_ead9_496e_bd0d_97702212e64d.slice/crio-369c5994ce7c4789b8e6e2c88193729c2332c42f27270d322725e5f685dc2e2f WatchSource:0}: Error finding container 369c5994ce7c4789b8e6e2c88193729c2332c42f27270d322725e5f685dc2e2f: Status 404 returned error can't find the container with id 369c5994ce7c4789b8e6e2c88193729c2332c42f27270d322725e5f685dc2e2f Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.669197 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.669614 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-07 07:51:48.16959352 +0000 UTC m=+165.078759995 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.692224 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx"] Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.776011 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" podStartSLOduration=87.775993779 podStartE2EDuration="1m27.775993779s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:47.754631658 +0000 UTC m=+164.663798133" watchObservedRunningTime="2026-03-07 07:51:47.775993779 +0000 UTC m=+164.685160254" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.777191 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5r998"] Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.782904 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" 
Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.783254 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:48.283238166 +0000 UTC m=+165.192404641 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.796334 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8"]
Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.800685 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk"]
Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.846192 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wj76l"]
Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.849991 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv"]
Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.886252 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.886985 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:48.38697144 +0000 UTC m=+165.296137915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.890897 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 07 07:51:47 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld
Mar 07 07:51:47 crc kubenswrapper[4761]: [+]process-running ok
Mar 07 07:51:47 crc kubenswrapper[4761]: healthz check failed
Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.890933 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.903017 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-8vzkp"
Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.912818 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9475l"]
Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.932586 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv"]
Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.988525 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.988877 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:48.488852809 +0000 UTC m=+165.398019284 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.993235 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c9px5"]
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.063489 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" podStartSLOduration=88.063281537 podStartE2EDuration="1m28.063281537s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:48.041559886 +0000 UTC m=+164.950726361" watchObservedRunningTime="2026-03-07 07:51:48.063281537 +0000 UTC m=+164.972448012"
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.068853 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" event={"ID":"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c","Type":"ContainerStarted","Data":"f4eddaa9a7d1e49cbc92f74923a70937aceb95e00a58006b711c2bf1db6626b3"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.071698 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" event={"ID":"687429b1-d68f-4e6e-92f6-24da382d4bfe","Type":"ContainerStarted","Data":"8aca140324ab7494269b3ca07621a7497c2287dfd3fc581f4c3f74bb11a4766a"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.073598 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv" event={"ID":"934ec594-4040-486a-9df3-7841f5809127","Type":"ContainerStarted","Data":"7fea29088f65c88e4f23648846b099d25fffc066d2ddaaccea82ee9509b84822"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.077239 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-82d2w" event={"ID":"71700326-fbbb-40ef-a439-3c8feccac4a1","Type":"ContainerStarted","Data":"a782b5b473065738b8da82ccec1f5066da3fa7332d1dfe778088e329b69a361f"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.086884 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" event={"ID":"034dd126-5e75-4772-9464-5ccfdaa0f447","Type":"ContainerStarted","Data":"dbd1318783728d8f4cd595e1b7374b9dc5689a1932e3e972ba5960b26fe263c7"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.091335 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.091936 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:48.591697781 +0000 UTC m=+165.500864256 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.092506 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" event={"ID":"3a8d1b9f-21ff-4c54-9dfe-5337492d861e","Type":"ContainerStarted","Data":"44c0d6ca56c24e7abedfd03d535d083dac3b043527ac4a0b9c4eee6411c3dea2"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.093917 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" event={"ID":"0ea66074-912c-4797-b4a5-cfd5b8927d2e","Type":"ContainerStarted","Data":"659c10530c9733f76f3543e19d7b48244a0ae3b92e7c72a81ca9e3a0aca604e4"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.140554 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" event={"ID":"f71cfd24-83ce-4450-8257-8d9d922d018d","Type":"ContainerStarted","Data":"6127d57bf8309c879f5c2aa7556182df8972724e7191b84f30b9e6e98de57a0b"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.140599 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" event={"ID":"f71cfd24-83ce-4450-8257-8d9d922d018d","Type":"ContainerStarted","Data":"0d9ec4891b813046712ee499287f166db9002ec1ecae43e0c2eabff12a9e9f19"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.154956 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" event={"ID":"9313f05e-3d9f-4a42-a2f2-0fd297a2979d","Type":"ContainerStarted","Data":"9d7b37cc12930e2de5642641b48afc4899cc39f6e3c9988ac63403be24f6aedd"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.170432 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" event={"ID":"00f287a9-208e-4447-9572-cbe1230c61be","Type":"ContainerStarted","Data":"5981bf2ee24569fec6e8794e3e566a2e75f7708083f40d3bfdf1b4767c3a8eea"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.193181 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.195704 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:48.695692401 +0000 UTC m=+165.604858876 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.213552 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" event={"ID":"f7a57ac7-fb31-4740-a91c-79947bbdb195","Type":"ContainerStarted","Data":"c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.213588 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz"
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.266145 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" event={"ID":"61255be3-1f4f-4599-8372-c3397004b774","Type":"ContainerStarted","Data":"48565907150d6845542d49d709c120eadc36d7459659fa9d62396ccec80e7fef"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.284380 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz"
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.293696 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.294922 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:48.794903254 +0000 UTC m=+165.704069739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.324035 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" event={"ID":"66a6be2c-da25-42c0-a8fa-075b8273bb65","Type":"ContainerStarted","Data":"b26ebf4b31ad9b755874a090e2400d415f6a366f21084cf982eba0cc6f886633"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.324104 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" event={"ID":"66a6be2c-da25-42c0-a8fa-075b8273bb65","Type":"ContainerStarted","Data":"321309a55803adc9e4242f9518e5893bb505901074141c78c3bd1a4360ba12ef"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.324939 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" podStartSLOduration=88.324924127 podStartE2EDuration="1m28.324924127s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:48.229089477 +0000 UTC m=+165.138255952" watchObservedRunningTime="2026-03-07 07:51:48.324924127 +0000 UTC m=+165.234090602"
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.332913 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" event={"ID":"9543f0f5-dfe9-4443-816d-a6a8c4fbb012","Type":"ContainerStarted","Data":"f96d2e9a842a7975628515f6788e8d7c70ddad822143b4791abb173c63db067a"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.342596 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9475l" event={"ID":"0013064e-ed56-415d-b236-1c92e98194d5","Type":"ContainerStarted","Data":"2ccf5948ea8c567256177092591e4d560d551ad15bc9fd8c0f362c3dde3a9172"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.403888 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.407997 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" podStartSLOduration=88.407981246 podStartE2EDuration="1m28.407981246s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:48.367086237 +0000 UTC m=+165.276252712" watchObservedRunningTime="2026-03-07 07:51:48.407981246 +0000 UTC m=+165.317147721"
Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.408654 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:48.908639752 +0000 UTC m=+165.817806307 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.410763 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" event={"ID":"86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a","Type":"ContainerStarted","Data":"34d2d2cb2341c1b45d586137b043812cd612dfb1dc1f3ddbb01770c811fd8716"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.451866 4761 generic.go:334] "Generic (PLEG): container finished" podID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerID="bc254112e73905e02005900a1949f45c5f06bdef133b8aa181c91e96a5e4cf40" exitCode=0
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.451922 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" event={"ID":"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0","Type":"ContainerDied","Data":"bc254112e73905e02005900a1949f45c5f06bdef133b8aa181c91e96a5e4cf40"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.455531 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-8vzkp" podStartSLOduration=88.455521337 podStartE2EDuration="1m28.455521337s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:48.408251093 +0000 UTC m=+165.317417568" watchObservedRunningTime="2026-03-07 07:51:48.455521337 +0000 UTC m=+165.364687812"
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.468117 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" event={"ID":"25717bfc-51a4-4724-bbed-70d94a322755","Type":"ContainerStarted","Data":"df7278c12d9df73bd8ba5149828e481e709960e3b898b7b31b2577a521726b57"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.478605 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lhr9n" event={"ID":"99b83f8b-bc0d-4815-b7ed-26eb344fafac","Type":"ContainerStarted","Data":"feae29b60044ff5c9765f01361e9cd059e173c577a7a0a0c445811658e575aa6"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.492883 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podStartSLOduration=88.49286808 podStartE2EDuration="1m28.49286808s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:48.492452309 +0000 UTC m=+165.401618784" watchObservedRunningTime="2026-03-07 07:51:48.49286808 +0000 UTC m=+165.402034555"
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.502302 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" event={"ID":"1ebfd20a-723e-45af-ac08-ed82440f1a8f","Type":"ContainerStarted","Data":"a8f2d2556b39ec488c072997dda9f0f2f7c17eb6e08c113d2c1be0a3574c4a59"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.504074 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" event={"ID":"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4","Type":"ContainerStarted","Data":"1a8ffd88bb78fcdf364a600991eeae7a9d036008ba5903cb1ea3df658e782acd"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.504098 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" event={"ID":"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4","Type":"ContainerStarted","Data":"2d717a94686b2f181b89108ed5cd112f1b06179854cfee8e8ba2da8ed764b8b1"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.504854 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.505965 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.005950909 +0000 UTC m=+165.915117384 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.510869 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" event={"ID":"f7d70be0-84a3-4969-bbe9-283e1588343a","Type":"ContainerStarted","Data":"6a553a39c8fed3f8c52f3c2f959f83519a54287d30e878c3b67324f42f2ffe58"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.512297 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2lhb8" event={"ID":"55412b4c-53c7-4b21-8d7c-87879ef79ed0","Type":"ContainerStarted","Data":"0c6fac619c77e2e5bbca7ba4216168dfb98fbe2c07537854abdd01da802bb57c"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.512852 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2lhb8"
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.513515 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" event={"ID":"45228992-9c3e-47bd-a54b-418c9b6183a8","Type":"ContainerStarted","Data":"52e8f4134c8be5fb9f52b575f7764f7364096830275d1e2e9367982f9d97ed25"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.514736 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" event={"ID":"85a8c76a-70df-46fe-af69-21b2b58c0ced","Type":"ContainerStarted","Data":"698f31d2d502eafe05576e5bcc690b1dc5daae173ce7088f3e837584479c6ba0"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.517566 4761 generic.go:334] "Generic (PLEG): container finished" podID="071d5325-8638-4180-aefa-fb07f5533bb2" containerID="ba5a87501d7ae91bcce11fd921bde3ab7f3de84e85f4cf8e6cbcd8fa31c0caaf" exitCode=0
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.517630 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" event={"ID":"071d5325-8638-4180-aefa-fb07f5533bb2","Type":"ContainerDied","Data":"ba5a87501d7ae91bcce11fd921bde3ab7f3de84e85f4cf8e6cbcd8fa31c0caaf"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.529940 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.529978 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.530291 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-82d2w" podStartSLOduration=6.530282163 podStartE2EDuration="6.530282163s" podCreationTimestamp="2026-03-07 07:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:48.527932156 +0000 UTC m=+165.437098631" watchObservedRunningTime="2026-03-07 07:51:48.530282163 +0000 UTC m=+165.439448628"
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.557151 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fsrlc" event={"ID":"7b1e7bf9-5dc9-4326-b63d-426a716351bc","Type":"ContainerStarted","Data":"2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.557196 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fsrlc" event={"ID":"7b1e7bf9-5dc9-4326-b63d-426a716351bc","Type":"ContainerStarted","Data":"5d8c56f6ff97a80ea16e87c27e25c3984cdb01c579b7c368c7a0e106d6b80361"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.587301 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" event={"ID":"69f8f788-a780-4cf1-9ef7-397428d61593","Type":"ContainerStarted","Data":"2f2df3f61605050ff823689a3ab84881edb02d6979ac541c6c9979f7a1145713"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.587349 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" event={"ID":"69f8f788-a780-4cf1-9ef7-397428d61593","Type":"ContainerStarted","Data":"c150a349c466aab661ebc693d49c15af1d9dfe7cb7614720742bde80d20f9114"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.588156 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw"
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.592643 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k4zfw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.592730 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused"
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.595198 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" event={"ID":"4333d454-5d55-4214-af24-c1a056088b2f","Type":"ContainerStarted","Data":"2810f74863354779ecc0c704d61b3b57a5a9b4a0fce043062e9e198c6137eb2b"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.596673 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" event={"ID":"237f8811-62cd-4c45-88e1-9a57d376d192","Type":"ContainerStarted","Data":"28056b96a8083b795866591f3e42a9bbdffd85ea38b4931fe2c77b75e49b8515"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.603891 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" event={"ID":"71ec20b6-ead9-496e-bd0d-97702212e64d","Type":"ContainerStarted","Data":"369c5994ce7c4789b8e6e2c88193729c2332c42f27270d322725e5f685dc2e2f"}
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.606818 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.610192 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.110175775 +0000 UTC m=+166.019342250 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.711116 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.711233 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.211215083 +0000 UTC m=+166.120381558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.711795 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.712887 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.212868973 +0000 UTC m=+166.122035518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.812603 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.812781 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.312756033 +0000 UTC m=+166.221922508 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.813142 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.813485 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.313471571 +0000 UTC m=+166.222638046 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.869677 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6qsbw"
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.914179 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.914989 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.41497001 +0000 UTC m=+166.324136485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.929503 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 07 07:51:48 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld
Mar 07 07:51:48 crc kubenswrapper[4761]: [+]process-running ok
Mar 07 07:51:48 crc kubenswrapper[4761]: healthz check failed
Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.929853 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.005566 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" podStartSLOduration=89.005546872 podStartE2EDuration="1m29.005546872s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.000360916 +0000 UTC m=+165.909527401" watchObservedRunningTime="2026-03-07 07:51:49.005546872 +0000 UTC m=+165.914713347"
Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.024186 4761 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.024569 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.524556857 +0000 UTC m=+166.433723322 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.125067 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.125507 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.625487282 +0000 UTC m=+166.534653757 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.136180 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" podStartSLOduration=89.136163313 podStartE2EDuration="1m29.136163313s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.134218455 +0000 UTC m=+166.043384930" watchObservedRunningTime="2026-03-07 07:51:49.136163313 +0000 UTC m=+166.045329788" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.199546 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" podStartSLOduration=89.19953124 podStartE2EDuration="1m29.19953124s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.191166036 +0000 UTC m=+166.100332511" watchObservedRunningTime="2026-03-07 07:51:49.19953124 +0000 UTC m=+166.108697715" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.226691 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.227093 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.727078163 +0000 UTC m=+166.636244638 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.292816 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" podStartSLOduration=89.292800869 podStartE2EDuration="1m29.292800869s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.24984714 +0000 UTC m=+166.159013615" watchObservedRunningTime="2026-03-07 07:51:49.292800869 +0000 UTC m=+166.201967334" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.328094 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:49 crc 
kubenswrapper[4761]: E0307 07:51:49.328420 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.828405758 +0000 UTC m=+166.737572233 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.420020 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podStartSLOduration=89.420003966 podStartE2EDuration="1m29.420003966s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.316768794 +0000 UTC m=+166.225935269" watchObservedRunningTime="2026-03-07 07:51:49.420003966 +0000 UTC m=+166.329170441" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.436395 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.436733 4761 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.936707844 +0000 UTC m=+166.845874319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.451003 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-2lhb8" podStartSLOduration=89.450986212 podStartE2EDuration="1m29.450986212s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.419633017 +0000 UTC m=+166.328799492" watchObservedRunningTime="2026-03-07 07:51:49.450986212 +0000 UTC m=+166.360152687" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.496187 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" podStartSLOduration=89.496167576 podStartE2EDuration="1m29.496167576s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.451525986 +0000 UTC m=+166.360692481" watchObservedRunningTime="2026-03-07 07:51:49.496167576 +0000 UTC m=+166.405334051" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.523947 4761 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" podStartSLOduration=7.523928504 podStartE2EDuration="7.523928504s" podCreationTimestamp="2026-03-07 07:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.496966076 +0000 UTC m=+166.406132541" watchObservedRunningTime="2026-03-07 07:51:49.523928504 +0000 UTC m=+166.433094979" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.545275 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.545590 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.045575153 +0000 UTC m=+166.954741628 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.580264 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" podStartSLOduration=89.580249519 podStartE2EDuration="1m29.580249519s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.573799311 +0000 UTC m=+166.482965786" watchObservedRunningTime="2026-03-07 07:51:49.580249519 +0000 UTC m=+166.489415994" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.580900 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" podStartSLOduration=89.580895805 podStartE2EDuration="1m29.580895805s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.525502873 +0000 UTC m=+166.434669358" watchObservedRunningTime="2026-03-07 07:51:49.580895805 +0000 UTC m=+166.490062280" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.595038 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" podStartSLOduration=89.59501911 podStartE2EDuration="1m29.59501911s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.594862496 +0000 UTC m=+166.504028991" watchObservedRunningTime="2026-03-07 07:51:49.59501911 +0000 UTC m=+166.504185585" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.630495 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-lhr9n" podStartSLOduration=7.630473416 podStartE2EDuration="7.630473416s" podCreationTimestamp="2026-03-07 07:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.626793756 +0000 UTC m=+166.535960231" watchObservedRunningTime="2026-03-07 07:51:49.630473416 +0000 UTC m=+166.539639891" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.647226 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fsrlc" podStartSLOduration=89.647213505 podStartE2EDuration="1m29.647213505s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.645355229 +0000 UTC m=+166.554521714" watchObservedRunningTime="2026-03-07 07:51:49.647213505 +0000 UTC m=+166.556379980" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.648081 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" event={"ID":"1ebfd20a-723e-45af-ac08-ed82440f1a8f","Type":"ContainerStarted","Data":"6f5c2c1ed74fd079a17fe6dba95da5f0d42d3e407d691e9cdac1f93363486c07"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.648748 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.649065 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.149054259 +0000 UTC m=+167.058220734 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.650805 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" event={"ID":"00f287a9-208e-4447-9572-cbe1230c61be","Type":"ContainerStarted","Data":"fba594bf2f60e16f46068d3160eff1095f288fdad6cb0b44f840487d3a1b142b"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.651873 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" event={"ID":"034dd126-5e75-4772-9464-5ccfdaa0f447","Type":"ContainerStarted","Data":"b73a389fa9b79d5b3968335415af9746df829c458eb711bfcd62c31fb98666ab"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.654548 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" 
event={"ID":"3a8d1b9f-21ff-4c54-9dfe-5337492d861e","Type":"ContainerStarted","Data":"ba9fb243bec4ae615dd93b5e8e4644913a25e31b8991a0572bef1d35c9a04b98"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.661565 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" event={"ID":"0ea66074-912c-4797-b4a5-cfd5b8927d2e","Type":"ContainerStarted","Data":"c24e0c02f1354d47f13663f7ab1c31413ac24da72c1da481aa037102e80c6c72"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.662134 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.663101 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" event={"ID":"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c","Type":"ContainerStarted","Data":"f653805564530b3c9f24594859cb576b1b2a5aeba158e738d4d818f245ec5dbe"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.663511 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.663570 4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.663596 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection 
refused" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.664292 4761 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5n9bv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.664313 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" podUID="9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.664539 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" event={"ID":"71ec20b6-ead9-496e-bd0d-97702212e64d","Type":"ContainerStarted","Data":"013056b0040015113182560c14699f07344cb8e1128183fb51d69460d98786f5"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.665154 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.669941 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.669995 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.671155 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c9px5" event={"ID":"d16bf67b-8e20-4f35-bf5c-d7e923919679","Type":"ContainerStarted","Data":"659a8ddfbd16af278c9dd0fe13d2dd391bbe05fef245e8d51c1ea994e39105d4"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.674246 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" event={"ID":"85a8c76a-70df-46fe-af69-21b2b58c0ced","Type":"ContainerStarted","Data":"49665d02b399aa8de25452c6a2a27ad38905655aa513a0fae6ec34b53ae8ca7b"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.675631 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" event={"ID":"86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a","Type":"ContainerStarted","Data":"4a7e62bda6144ef3d0bfd6105d4436c0517e5e52046d989c0498da1e8c8b0fbd"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.676830 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" event={"ID":"25717bfc-51a4-4724-bbed-70d94a322755","Type":"ContainerStarted","Data":"38e7f78b734bf48b470e7caceb7fa4b6288c04aa45c4b0a362aae0273188ae2a"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.678117 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" event={"ID":"61255be3-1f4f-4599-8372-c3397004b774","Type":"ContainerStarted","Data":"1cba8e851c67f99c1a0b544a2787a074a5b57394e460b3ea72b45435d898849d"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.681275 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" event={"ID":"9313f05e-3d9f-4a42-a2f2-0fd297a2979d","Type":"ContainerStarted","Data":"ecba58189f3b9fad85e87478f4f88a084bf7d5540445506cf4c010e5e3c83052"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.685228 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv" event={"ID":"934ec594-4040-486a-9df3-7841f5809127","Type":"ContainerStarted","Data":"2179aee900f37163abf08bb1ae6e74c836bd0ab3c3995d41eb5a4f1bce3584f8"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.686977 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" podStartSLOduration=89.686960215 podStartE2EDuration="1m29.686960215s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.685038168 +0000 UTC m=+166.594204643" watchObservedRunningTime="2026-03-07 07:51:49.686960215 +0000 UTC m=+166.596126690" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.693173 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" event={"ID":"45228992-9c3e-47bd-a54b-418c9b6183a8","Type":"ContainerStarted","Data":"7717bc6fd45fafa5d6ac6b7f93e8b30b460745e34aa04d1a38cc9b0e4ccea5c5"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.694489 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k4zfw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.694532 4761 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused"
Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.695072    4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.695090    4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.700751    4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podStartSLOduration=89.700734992 podStartE2EDuration="1m29.700734992s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.699806709 +0000 UTC m=+166.608973184" watchObservedRunningTime="2026-03-07 07:51:49.700734992 +0000 UTC m=+166.609901467"
Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.712569    4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l"
Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.712604    4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l"
Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.714418    4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" podStartSLOduration=89.714397506 podStartE2EDuration="1m29.714397506s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.71374965 +0000 UTC m=+166.622916125" watchObservedRunningTime="2026-03-07 07:51:49.714397506 +0000 UTC m=+166.623563981"
Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.735972    4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" podStartSLOduration=89.735956122 podStartE2EDuration="1m29.735956122s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.734150458 +0000 UTC m=+166.643316933" watchObservedRunningTime="2026-03-07 07:51:49.735956122 +0000 UTC m=+166.645122597"
Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.749756    4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.750978    4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.250953598 +0000 UTC m=+167.160120073 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.761501    4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" podStartSLOduration=89.761482066 podStartE2EDuration="1m29.761482066s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.758212996 +0000 UTC m=+166.667379471" watchObservedRunningTime="2026-03-07 07:51:49.761482066 +0000 UTC m=+166.670648611"
Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.854396    4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.856547    4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.356532117 +0000 UTC m=+167.265698592 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.911083    4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 07 07:51:49 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld
Mar 07 07:51:49 crc kubenswrapper[4761]: [+]process-running ok
Mar 07 07:51:49 crc kubenswrapper[4761]: healthz check failed
Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.911404    4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.956111    4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.956487    4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.456454778 +0000 UTC m=+167.365621253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.956786    4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.957188    4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.457166575 +0000 UTC m=+167.366333060 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.057873    4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.058053    4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.558024999 +0000 UTC m=+167.467191474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.058515    4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.058890    4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.55887231 +0000 UTC m=+167.468038785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.159227    4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.159457    4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.659416696 +0000 UTC m=+167.568583171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.159515    4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.160180    4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.660169954 +0000 UTC m=+167.569336429 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.261188    4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.261655    4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.761635512 +0000 UTC m=+167.670801987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.363046    4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.363406    4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.863388458 +0000 UTC m=+167.772554933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.464209    4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.464441    4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.964414075 +0000 UTC m=+167.873580550 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.464560    4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.464998    4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.964981969 +0000 UTC m=+167.874148444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.481244    4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-cm8bz"]
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.565800    4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.565937    4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.065911895 +0000 UTC m=+167.975078370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.566056    4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.566356    4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.066348795 +0000 UTC m=+167.975515270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.667507    4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.667655    4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.167629519 +0000 UTC m=+168.076795994 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.667821    4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.668228    4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.168211123 +0000 UTC m=+168.077377598 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.701421    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" event={"ID":"00f287a9-208e-4447-9572-cbe1230c61be","Type":"ContainerStarted","Data":"74748be7f81aceb5d50fa174e7637bf8e98b69cb3ac2137b55d0d0ed2802f255"}
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.703377    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" event={"ID":"071d5325-8638-4180-aefa-fb07f5533bb2","Type":"ContainerStarted","Data":"7281e063d5faf0d20ee3ef47d8affd31b02291545b1a1f3ed8ea99ba38ed3845"}
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.705600    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" event={"ID":"4333d454-5d55-4214-af24-c1a056088b2f","Type":"ContainerStarted","Data":"e8746ff1a7efb32c41919a7364e88252f7db7e4c34142347e13cac052e4be7dd"}
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.707056    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9475l" event={"ID":"0013064e-ed56-415d-b236-1c92e98194d5","Type":"ContainerStarted","Data":"911a21231284744084eff0d783810220f65dda9e055d71e4dfd6c65d13cc7bff"}
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.708382    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" event={"ID":"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0","Type":"ContainerStarted","Data":"43fecf17bd70cc24f894f9981f36f699613214c657ae37df741e21de54a09dc3"}
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.708771    4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48"
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.709769    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" event={"ID":"25717bfc-51a4-4724-bbed-70d94a322755","Type":"ContainerStarted","Data":"6801f5f398f60f8cca3cb48f6bcfe174267a879c24b0e6d47d8d9eb908cb3029"}
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.710096    4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx"
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.711176    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" event={"ID":"9313f05e-3d9f-4a42-a2f2-0fd297a2979d","Type":"ContainerStarted","Data":"523849b6303b38d4be96dda537a0df33f83360cec3dbc0d2ae5c5423692352b1"}
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.714050    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" event={"ID":"86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a","Type":"ContainerStarted","Data":"1156347f0a27bf0fb6efe48b02c479175805811e2925f39f4a1f8eb62f2c36ab"}
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.715868    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" event={"ID":"1ebfd20a-723e-45af-ac08-ed82440f1a8f","Type":"ContainerStarted","Data":"c05feadf9baf39ccfb789537bd35e377668442c144a612e44796a798762df843"}
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.717304    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" event={"ID":"237f8811-62cd-4c45-88e1-9a57d376d192","Type":"ContainerStarted","Data":"c293d76a37beb989b19c97b30dc084f51f3b4f36d2bdb4b107d0cba1f663f6a0"}
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.719409    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c9px5" event={"ID":"d16bf67b-8e20-4f35-bf5c-d7e923919679","Type":"ContainerStarted","Data":"4c091b23b2e4a0e28ef8373f43189cf78f21a02b28cb9b1334e1b8f53b9689a1"}
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.719532    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c9px5" event={"ID":"d16bf67b-8e20-4f35-bf5c-d7e923919679","Type":"ContainerStarted","Data":"31c95a9c5a5fb5c6362b101a826f78c79378a25e50483e3d6762a143b32c9261"}
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.719626    4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-c9px5"
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.721622    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv" event={"ID":"934ec594-4040-486a-9df3-7841f5809127","Type":"ContainerStarted","Data":"0ab4bbd1feb7115a6af09dc4b5edea81ee5a55ea57991824a2769d966e126bbe"}
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.722051    4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k4zfw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.722146    4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.722190    4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.722255    4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused"
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.722462    4761 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5n9bv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.722506    4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" podUID="9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.722572    4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.722591    4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.745831    4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" podStartSLOduration=90.745809839 podStartE2EDuration="1m30.745809839s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:50.744269841 +0000 UTC m=+167.653436316" watchObservedRunningTime="2026-03-07 07:51:50.745809839 +0000 UTC m=+167.654976314"
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.767475    4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv" podStartSLOduration=90.767455147 podStartE2EDuration="1m30.767455147s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:50.765549661 +0000 UTC m=+167.674716136" watchObservedRunningTime="2026-03-07 07:51:50.767455147 +0000 UTC m=+167.676621632"
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.769199    4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.769509    4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.269493937 +0000 UTC m=+168.178660412 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.790386    4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" podStartSLOduration=90.790368967 podStartE2EDuration="1m30.790368967s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:50.789988608 +0000 UTC m=+167.699155083" watchObservedRunningTime="2026-03-07 07:51:50.790368967 +0000 UTC m=+167.699535442"
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.796337    4761 patch_prober.go:28] interesting pod/apiserver-76f77b778f-g5b4l container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 07 07:51:50 crc kubenswrapper[4761]: [+]log ok
Mar 07 07:51:50 crc kubenswrapper[4761]: [+]etcd ok
Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/max-in-flight-filter ok
Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 07 07:51:50 crc kubenswrapper[4761]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 07 07:51:50 crc kubenswrapper[4761]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/openshift.io-startinformers ok
Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 07 07:51:50 crc kubenswrapper[4761]: livez check failed
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.796391    4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" podUID="f7d70be0-84a3-4969-bbe9-283e1588343a" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.826552    4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" podStartSLOduration=90.82652065 podStartE2EDuration="1m30.82652065s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:50.825050024 +0000 UTC m=+167.734216489" watchObservedRunningTime="2026-03-07 07:51:50.82652065 +0000 UTC m=+167.735687125"
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.864445    4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" podStartSLOduration=90.864429566 podStartE2EDuration="1m30.864429566s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:50.861047034 +0000 UTC m=+167.770213519" watchObservedRunningTime="2026-03-07 07:51:50.864429566 +0000 UTC m=+167.773596041"
Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.873953    4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.880386    4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.380367195 +0000 UTC m=+168.289533750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.894846 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:51:50 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:51:50 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:51:50 crc kubenswrapper[4761]: healthz check failed Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.894948 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.901110 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podStartSLOduration=90.901087842 podStartE2EDuration="1m30.901087842s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:50.886349412 +0000 UTC m=+167.795515887" watchObservedRunningTime="2026-03-07 07:51:50.901087842 +0000 UTC m=+167.810254317" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.919805 4761 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" podStartSLOduration=90.919786708 podStartE2EDuration="1m30.919786708s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:50.918258491 +0000 UTC m=+167.827424976" watchObservedRunningTime="2026-03-07 07:51:50.919786708 +0000 UTC m=+167.828953183" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.922738 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46024: no serving certificate available for the kubelet" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.964162 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" podStartSLOduration=90.964127681 podStartE2EDuration="1m30.964127681s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:50.958693279 +0000 UTC m=+167.867859774" watchObservedRunningTime="2026-03-07 07:51:50.964127681 +0000 UTC m=+167.873294156" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.975246 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.975626 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-07 07:51:51.475612532 +0000 UTC m=+168.384778997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.994447 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" podStartSLOduration=90.994430502 podStartE2EDuration="1m30.994430502s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:50.990234779 +0000 UTC m=+167.899401254" watchObservedRunningTime="2026-03-07 07:51:50.994430502 +0000 UTC m=+167.903596967" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.020646 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-c9px5" podStartSLOduration=8.020630912 podStartE2EDuration="8.020630912s" podCreationTimestamp="2026-03-07 07:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:51.016961832 +0000 UTC m=+167.926128307" watchObservedRunningTime="2026-03-07 07:51:51.020630912 +0000 UTC m=+167.929797387" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.065422 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" podStartSLOduration=91.065399365 
podStartE2EDuration="1m31.065399365s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:51.065039166 +0000 UTC m=+167.974205641" watchObservedRunningTime="2026-03-07 07:51:51.065399365 +0000 UTC m=+167.974565850" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.065914 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.076999 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.077473 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.577453359 +0000 UTC m=+168.486619914 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.082216 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46030: no serving certificate available for the kubelet" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.107382 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" podStartSLOduration=91.10734176 podStartE2EDuration="1m31.10734176s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:51.106019417 +0000 UTC m=+168.015185912" watchObservedRunningTime="2026-03-07 07:51:51.10734176 +0000 UTC m=+168.016508245" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.178059 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.178202 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-07 07:51:51.67818316 +0000 UTC m=+168.587349635 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.178617 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.179087 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.679067771 +0000 UTC m=+168.588234246 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.280185 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.280629 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.780596581 +0000 UTC m=+168.689763066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.291283 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46034: no serving certificate available for the kubelet" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.378897 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46038: no serving certificate available for the kubelet" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.381374 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.381637 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.881617469 +0000 UTC m=+168.790783944 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.482780 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.483238 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.983220491 +0000 UTC m=+168.892386966 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.486386 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46054: no serving certificate available for the kubelet" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.579283 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46056: no serving certificate available for the kubelet" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.584558 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.585013 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:52.084994487 +0000 UTC m=+168.994160962 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.617367 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l9gzh"] Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.617844 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" podUID="6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" containerName="controller-manager" containerID="cri-o://90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29" gracePeriod=30 Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.634844 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc"] Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.635054 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" podUID="ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" containerName="route-controller-manager" containerID="cri-o://db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7" gracePeriod=30 Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.685831 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.686168 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:52.186144987 +0000 UTC m=+169.095311452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.690130 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46068: no serving certificate available for the kubelet" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.720179 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.729068 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9475l" event={"ID":"0013064e-ed56-415d-b236-1c92e98194d5","Type":"ContainerStarted","Data":"3b57912174fe42497b7aba9165fb6ad40f157341a0decbb6ac128ae2810735a8"} Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.729276 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9475l" event={"ID":"0013064e-ed56-415d-b236-1c92e98194d5","Type":"ContainerStarted","Data":"ff0ff765c2a04066b4e318be46ec3c11080249a5e4e5b24253efc6d1a90e45ed"} Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.731064 4761 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" gracePeriod=30 Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.735422 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.763010 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.773662 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=0.773644855 podStartE2EDuration="773.644855ms" podCreationTimestamp="2026-03-07 07:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:51.770919368 +0000 UTC m=+168.680085843" watchObservedRunningTime="2026-03-07 07:51:51.773644855 +0000 UTC m=+168.682811320" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.791413 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.791684 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-03-07 07:51:52.291673305 +0000 UTC m=+169.200839780 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.802915 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46076: no serving certificate available for the kubelet" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.892403 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.893421 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:52.39340615 +0000 UTC m=+169.302572625 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.897310 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:51:51 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:51:51 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:51:51 crc kubenswrapper[4761]: healthz check failed Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.897357 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.909254 4761 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.995399 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:51 crc 
kubenswrapper[4761]: E0307 07:51:51.995682 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:52.495671708 +0000 UTC m=+169.404838183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.096491 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.096834 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:52.596818268 +0000 UTC m=+169.505984743 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.200561 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.200930 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:52.700914721 +0000 UTC m=+169.610081196 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.219417 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.300394 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.301170 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.301508 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:52.801480967 +0000 UTC m=+169.710647442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402240 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vx7x\" (UniqueName: \"kubernetes.io/projected/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-kube-api-access-4vx7x\") pod \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402293 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ggp8\" (UniqueName: \"kubernetes.io/projected/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-kube-api-access-2ggp8\") pod \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402368 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-config\") pod \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402509 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-proxy-ca-bundles\") pod \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402530 4761 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-client-ca\") pod \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402584 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-serving-cert\") pod \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402601 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-config\") pod \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402617 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-serving-cert\") pod \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402634 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-client-ca\") pod \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402858 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: 
\"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.403124 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:52.90310981 +0000 UTC m=+169.812276285 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.403177 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-client-ca" (OuterVolumeSpecName: "client-ca") pod "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" (UID: "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.403192 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-config" (OuterVolumeSpecName: "config") pod "ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" (UID: "ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.403233 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" (UID: "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.403383 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-client-ca" (OuterVolumeSpecName: "client-ca") pod "ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" (UID: "ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.403600 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-config" (OuterVolumeSpecName: "config") pod "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" (UID: "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.408078 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" (UID: "ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.408642 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-kube-api-access-4vx7x" (OuterVolumeSpecName: "kube-api-access-4vx7x") pod "ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" (UID: "ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f"). InnerVolumeSpecName "kube-api-access-4vx7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.408694 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-kube-api-access-2ggp8" (OuterVolumeSpecName: "kube-api-access-2ggp8") pod "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" (UID: "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf"). InnerVolumeSpecName "kube-api-access-2ggp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.408692 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" (UID: "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.462059 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46080: no serving certificate available for the kubelet" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.504293 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.504488 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:53.004458145 +0000 UTC m=+169.913624620 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.504833 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.504925 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.504947 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.504959 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.504971 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.504982 
4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vx7x\" (UniqueName: \"kubernetes.io/projected/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-kube-api-access-4vx7x\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.504994 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ggp8\" (UniqueName: \"kubernetes.io/projected/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-kube-api-access-2ggp8\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.505005 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.505016 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.505026 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.505234 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:53.005217214 +0000 UTC m=+169.914383699 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.606246 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.606439 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:53.106414306 +0000 UTC m=+170.015580781 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.606592 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.606962 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:53.106949469 +0000 UTC m=+170.016115934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.648164 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-phm95"] Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.648369 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" containerName="route-controller-manager" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.648381 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" containerName="route-controller-manager" Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.648397 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" containerName="controller-manager" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.648403 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" containerName="controller-manager" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.648487 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" containerName="controller-manager" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.648508 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" containerName="route-controller-manager" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.649230 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.650764 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.661582 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-phm95"] Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.695923 4761 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-07T07:51:51.909504323Z","Handler":null,"Name":""} Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.698649 4761 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.698681 4761 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.707336 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.710408 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.734553 4761 generic.go:334] "Generic (PLEG): container finished" podID="6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" containerID="90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29" exitCode=0 Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.734622 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" event={"ID":"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf","Type":"ContainerDied","Data":"90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29"} Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.734654 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" event={"ID":"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf","Type":"ContainerDied","Data":"882cc6d1d8ff9baed06c9225585998a42ca5bcddd90b232ac913143f0ea4ff01"} Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.734651 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.734673 4761 scope.go:117] "RemoveContainer" containerID="90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.737578 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9475l" event={"ID":"0013064e-ed56-415d-b236-1c92e98194d5","Type":"ContainerStarted","Data":"fe79b6e4aa767a854c12d0d69758ab5f893db0ab2843ebc311cd50f23d34a53d"} Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.739495 4761 generic.go:334] "Generic (PLEG): container finished" podID="ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" containerID="db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7" exitCode=0 Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.739579 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.739637 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" event={"ID":"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f","Type":"ContainerDied","Data":"db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7"} Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.739661 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" event={"ID":"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f","Type":"ContainerDied","Data":"bb6ae6626dd795e81aac77ed37f761416d9c9519413f017ccbc7c4679f0bbc42"} Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.745893 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 
07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.753941 4761 scope.go:117] "RemoveContainer" containerID="90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29" Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.754313 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29\": container with ID starting with 90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29 not found: ID does not exist" containerID="90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.754350 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29"} err="failed to get container status \"90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29\": rpc error: code = NotFound desc = could not find container \"90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29\": container with ID starting with 90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29 not found: ID does not exist" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.754389 4761 scope.go:117] "RemoveContainer" containerID="db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.761319 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-9475l" podStartSLOduration=10.761303649 podStartE2EDuration="10.761303649s" podCreationTimestamp="2026-03-07 07:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:52.760164921 +0000 UTC m=+169.669331396" watchObservedRunningTime="2026-03-07 07:51:52.761303649 +0000 UTC m=+169.670470124" Mar 07 
07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.772677 4761 scope.go:117] "RemoveContainer" containerID="db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7" Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.773111 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7\": container with ID starting with db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7 not found: ID does not exist" containerID="db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.773160 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7"} err="failed to get container status \"db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7\": rpc error: code = NotFound desc = could not find container \"db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7\": container with ID starting with db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7 not found: ID does not exist" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.806319 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc"] Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.808557 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.808604 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxh7b\" (UniqueName: \"kubernetes.io/projected/4601b717-e620-42a5-9f21-3b6fea1e71ff-kube-api-access-pxh7b\") pod \"community-operators-phm95\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.808635 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-utilities\") pod \"community-operators-phm95\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.808725 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-catalog-content\") pod \"community-operators-phm95\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.813634 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.813669 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.814308 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc"] Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.820499 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l9gzh"] Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.828222 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l9gzh"] Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.841416 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ztv97"] Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.842293 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.846311 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.863579 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.889780 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:51:52 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:51:52 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:51:52 crc kubenswrapper[4761]: healthz check failed Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.889837 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.909088 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ztv97"] Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.909643 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-catalog-content\") pod 
\"community-operators-phm95\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.909863 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxh7b\" (UniqueName: \"kubernetes.io/projected/4601b717-e620-42a5-9f21-3b6fea1e71ff-kube-api-access-pxh7b\") pod \"community-operators-phm95\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.909918 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-utilities\") pod \"community-operators-phm95\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.910235 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-catalog-content\") pod \"community-operators-phm95\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.910248 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-utilities\") pod \"community-operators-phm95\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.932595 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxh7b\" (UniqueName: \"kubernetes.io/projected/4601b717-e620-42a5-9f21-3b6fea1e71ff-kube-api-access-pxh7b\") pod \"community-operators-phm95\" (UID: 
\"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.964226 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.010981 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-catalog-content\") pod \"certified-operators-ztv97\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.012291 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-utilities\") pod \"certified-operators-ztv97\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.012503 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjlrz\" (UniqueName: \"kubernetes.io/projected/af0bdacc-ab60-43aa-adf2-86894b0896e3-kube-api-access-bjlrz\") pod \"certified-operators-ztv97\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.035862 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.049740 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jzrwt"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.050678 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.084271 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jzrwt"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.114677 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjlrz\" (UniqueName: \"kubernetes.io/projected/af0bdacc-ab60-43aa-adf2-86894b0896e3-kube-api-access-bjlrz\") pod \"certified-operators-ztv97\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.114780 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-catalog-content\") pod \"certified-operators-ztv97\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.114828 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-utilities\") pod \"certified-operators-ztv97\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.118389 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-catalog-content\") pod \"certified-operators-ztv97\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.118630 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-utilities\") pod \"certified-operators-ztv97\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.139008 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjlrz\" (UniqueName: \"kubernetes.io/projected/af0bdacc-ab60-43aa-adf2-86894b0896e3-kube-api-access-bjlrz\") pod \"certified-operators-ztv97\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.177777 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.212908 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-phm95"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.221790 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-utilities\") pod \"community-operators-jzrwt\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.221863 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-catalog-content\") pod \"community-operators-jzrwt\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.221904 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjzgb\" (UniqueName: \"kubernetes.io/projected/ace45696-b259-49f7-bfd9-8afe2557ac3e-kube-api-access-zjzgb\") pod \"community-operators-jzrwt\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.255935 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8klgk"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.258811 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: W0307 07:51:53.266686 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4601b717_e620_42a5_9f21_3b6fea1e71ff.slice/crio-87669bb4bd1b22af2f1cf3323992c4d2932aba3177404dd39bc77b7522579d9f WatchSource:0}: Error finding container 87669bb4bd1b22af2f1cf3323992c4d2932aba3177404dd39bc77b7522579d9f: Status 404 returned error can't find the container with id 87669bb4bd1b22af2f1cf3323992c4d2932aba3177404dd39bc77b7522579d9f Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.278695 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8klgk"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.323212 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-utilities\") pod \"community-operators-jzrwt\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.323301 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-catalog-content\") pod \"community-operators-jzrwt\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.323342 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjzgb\" (UniqueName: \"kubernetes.io/projected/ace45696-b259-49f7-bfd9-8afe2557ac3e-kube-api-access-zjzgb\") pod \"community-operators-jzrwt\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 
crc kubenswrapper[4761]: I0307 07:51:53.323701 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-utilities\") pod \"community-operators-jzrwt\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.323917 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-catalog-content\") pod \"community-operators-jzrwt\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.343202 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjzgb\" (UniqueName: \"kubernetes.io/projected/ace45696-b259-49f7-bfd9-8afe2557ac3e-kube-api-access-zjzgb\") pod \"community-operators-jzrwt\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.382407 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.383170 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.386118 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.386598 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.386779 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.387496 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.387657 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.387881 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.388607 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7757844dd9-dwvqd"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.392183 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.397955 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.398053 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.398350 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.398475 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.398605 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.399566 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7757844dd9-dwvqd"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.400366 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.407002 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.411566 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.427791 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg7qd\" (UniqueName: \"kubernetes.io/projected/d4b1310d-3887-4489-bbe0-5c63cd91603b-kube-api-access-wg7qd\") pod \"certified-operators-8klgk\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.427859 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-catalog-content\") pod \"certified-operators-8klgk\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.427946 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-utilities\") pod \"certified-operators-8klgk\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.436937 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.483130 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ztv97"] Mar 07 07:51:53 crc kubenswrapper[4761]: W0307 07:51:53.486192 4761 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf0bdacc_ab60_43aa_adf2_86894b0896e3.slice/crio-380bf8edebb71ccc54dc5753c5a6aefa35966a99189fc44f9ac78aa54408029b WatchSource:0}: Error finding container 380bf8edebb71ccc54dc5753c5a6aefa35966a99189fc44f9ac78aa54408029b: Status 404 returned error can't find the container with id 380bf8edebb71ccc54dc5753c5a6aefa35966a99189fc44f9ac78aa54408029b Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.528719 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-proxy-ca-bundles\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.528816 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-catalog-content\") pod \"certified-operators-8klgk\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.528843 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-config\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.528882 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-client-ca\") pod 
\"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.529354 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-catalog-content\") pod \"certified-operators-8klgk\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.528899 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69cbaffe-e087-4b95-9943-d13f4455a667-serving-cert\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.529418 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-client-ca\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.529450 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7p22\" (UniqueName: \"kubernetes.io/projected/4b2f038d-4913-4c34-bf43-b97fe1d898d2-kube-api-access-q7p22\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.529510 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9467\" (UniqueName: \"kubernetes.io/projected/69cbaffe-e087-4b95-9943-d13f4455a667-kube-api-access-x9467\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.529592 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-utilities\") pod \"certified-operators-8klgk\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.529633 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b2f038d-4913-4c34-bf43-b97fe1d898d2-serving-cert\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.529656 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg7qd\" (UniqueName: \"kubernetes.io/projected/d4b1310d-3887-4489-bbe0-5c63cd91603b-kube-api-access-wg7qd\") pod \"certified-operators-8klgk\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.529686 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-config\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " 
pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.530211 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-utilities\") pod \"certified-operators-8klgk\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.574509 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg7qd\" (UniqueName: \"kubernetes.io/projected/d4b1310d-3887-4489-bbe0-5c63cd91603b-kube-api-access-wg7qd\") pod \"certified-operators-8klgk\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.597116 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.602303 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ls7db"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.631151 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-proxy-ca-bundles\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.631221 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-config\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " 
pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.631281 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-client-ca\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.631301 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69cbaffe-e087-4b95-9943-d13f4455a667-serving-cert\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.631316 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-client-ca\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.631364 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7p22\" (UniqueName: \"kubernetes.io/projected/4b2f038d-4913-4c34-bf43-b97fe1d898d2-kube-api-access-q7p22\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.631381 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9467\" (UniqueName: 
\"kubernetes.io/projected/69cbaffe-e087-4b95-9943-d13f4455a667-kube-api-access-x9467\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.631642 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b2f038d-4913-4c34-bf43-b97fe1d898d2-serving-cert\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.631707 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-config\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.633582 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-client-ca\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.633647 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-client-ca\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.633793 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-proxy-ca-bundles\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd"
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.636584 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-config\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2"
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.637462 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-config\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd"
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.638419 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69cbaffe-e087-4b95-9943-d13f4455a667-serving-cert\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2"
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.638909 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b2f038d-4913-4c34-bf43-b97fe1d898d2-serving-cert\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd"
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.664210 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7p22\" (UniqueName: \"kubernetes.io/projected/4b2f038d-4913-4c34-bf43-b97fe1d898d2-kube-api-access-q7p22\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd"
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.667457 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9467\" (UniqueName: \"kubernetes.io/projected/69cbaffe-e087-4b95-9943-d13f4455a667-kube-api-access-x9467\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2"
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.717450 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" path="/var/lib/kubelet/pods/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf/volumes"
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.718440 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.719177 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" path="/var/lib/kubelet/pods/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f/volumes"
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.721233 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2"
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.733599 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd"
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.772420 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46086: no serving certificate available for the kubelet"
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.783071 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" event={"ID":"473ecd8c-4e56-40ac-9444-2d43490c6424","Type":"ContainerStarted","Data":"4dd05b87400e520fab187d8e6fc531d0b912721b961e43f10251c6818333d374"}
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.784868 4761 generic.go:334] "Generic (PLEG): container finished" podID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerID="1c3274c0a25c242c822a9a96600580a87d121dad2e64c3584b09930e252e967b" exitCode=0
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.784941 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztv97" event={"ID":"af0bdacc-ab60-43aa-adf2-86894b0896e3","Type":"ContainerDied","Data":"1c3274c0a25c242c822a9a96600580a87d121dad2e64c3584b09930e252e967b"}
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.784962 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztv97" event={"ID":"af0bdacc-ab60-43aa-adf2-86894b0896e3","Type":"ContainerStarted","Data":"380bf8edebb71ccc54dc5753c5a6aefa35966a99189fc44f9ac78aa54408029b"}
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.795998 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.796117 4761 generic.go:334] "Generic (PLEG): container finished" podID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerID="79cf6c37dd3da83bfe64b281cbf9b5693aab6cce5559515b7d72605580520781" exitCode=0
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.796643 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phm95" event={"ID":"4601b717-e620-42a5-9f21-3b6fea1e71ff","Type":"ContainerDied","Data":"79cf6c37dd3da83bfe64b281cbf9b5693aab6cce5559515b7d72605580520781"}
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.796669 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phm95" event={"ID":"4601b717-e620-42a5-9f21-3b6fea1e71ff","Type":"ContainerStarted","Data":"87669bb4bd1b22af2f1cf3323992c4d2932aba3177404dd39bc77b7522579d9f"}
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.890230 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 07 07:51:53 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld
Mar 07 07:51:53 crc kubenswrapper[4761]: [+]process-running ok
Mar 07 07:51:53 crc kubenswrapper[4761]: healthz check failed
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.890568 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.897253 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jzrwt"]
Mar 07 07:51:53 crc kubenswrapper[4761]: W0307 07:51:53.915451 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace45696_b259_49f7_bfd9_8afe2557ac3e.slice/crio-0b17a3567c82ae855493975810bebedc8e189fc6631bb28e67e239c75520eef1 WatchSource:0}: Error finding container 0b17a3567c82ae855493975810bebedc8e189fc6631bb28e67e239c75520eef1: Status 404 returned error can't find the container with id 0b17a3567c82ae855493975810bebedc8e189fc6631bb28e67e239c75520eef1
Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.990911 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2"]
Mar 07 07:51:53 crc kubenswrapper[4761]: W0307 07:51:53.994646 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69cbaffe_e087_4b95_9943_d13f4455a667.slice/crio-ba720ee940cc5e6fe43e1271eb44ff8d5bf3d868b7008616bf7cd88e9705b9fd WatchSource:0}: Error finding container ba720ee940cc5e6fe43e1271eb44ff8d5bf3d868b7008616bf7cd88e9705b9fd: Status 404 returned error can't find the container with id ba720ee940cc5e6fe43e1271eb44ff8d5bf3d868b7008616bf7cd88e9705b9fd
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.073316 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8klgk"]
Mar 07 07:51:54 crc kubenswrapper[4761]: W0307 07:51:54.083168 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4b1310d_3887_4489_bbe0_5c63cd91603b.slice/crio-afdb857524a0d1a4abf8957b56f8511a734c47a465cce76fa63859b050ae2b35 WatchSource:0}: Error finding container afdb857524a0d1a4abf8957b56f8511a734c47a465cce76fa63859b050ae2b35: Status 404 returned error can't find the container with id afdb857524a0d1a4abf8957b56f8511a734c47a465cce76fa63859b050ae2b35
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.273469 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7757844dd9-dwvqd"]
Mar 07 07:51:54 crc kubenswrapper[4761]: W0307 07:51:54.279382 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b2f038d_4913_4c34_bf43_b97fe1d898d2.slice/crio-7ba32e063662cfd1fcdc98e13e3195822f8ddde6d39d1e1e6f8d2088db74550f WatchSource:0}: Error finding container 7ba32e063662cfd1fcdc98e13e3195822f8ddde6d39d1e1e6f8d2088db74550f: Status 404 returned error can't find the container with id 7ba32e063662cfd1fcdc98e13e3195822f8ddde6d39d1e1e6f8d2088db74550f
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.647195 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2xc9s"]
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.648506 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xc9s"
Mar 07 07:51:54 crc kubenswrapper[4761]: W0307 07:51:54.650976 4761 reflector.go:561] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb": failed to list *v1.Secret: secrets "redhat-marketplace-dockercfg-x2ctb" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object
Mar 07 07:51:54 crc kubenswrapper[4761]: E0307 07:51:54.651016 4761 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-x2ctb\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"redhat-marketplace-dockercfg-x2ctb\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.666933 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xc9s"]
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.713928 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.719843 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.749508 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.750187 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.758296 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.758903 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-catalog-content\") pod \"redhat-marketplace-2xc9s\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " pod="openshift-marketplace/redhat-marketplace-2xc9s"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.758942 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-utilities\") pod \"redhat-marketplace-2xc9s\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " pod="openshift-marketplace/redhat-marketplace-2xc9s"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.758999 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr6qf\" (UniqueName: \"kubernetes.io/projected/e614b274-38db-4951-8f55-a09c49011cb5-kube-api-access-mr6qf\") pod \"redhat-marketplace-2xc9s\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " pod="openshift-marketplace/redhat-marketplace-2xc9s"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.769957 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.770234 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.803671 4761 generic.go:334] "Generic (PLEG): container finished" podID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerID="8d34025fd319ffd3e6850da4893b9537f4f1e29dec8fa6d5bb750de89505362c" exitCode=0
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.803777 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzrwt" event={"ID":"ace45696-b259-49f7-bfd9-8afe2557ac3e","Type":"ContainerDied","Data":"8d34025fd319ffd3e6850da4893b9537f4f1e29dec8fa6d5bb750de89505362c"}
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.803812 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzrwt" event={"ID":"ace45696-b259-49f7-bfd9-8afe2557ac3e","Type":"ContainerStarted","Data":"0b17a3567c82ae855493975810bebedc8e189fc6631bb28e67e239c75520eef1"}
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.813218 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" event={"ID":"69cbaffe-e087-4b95-9943-d13f4455a667","Type":"ContainerStarted","Data":"f36b7833aa5e7f0d50c890e408bf7d4a0662ba6342003c87a320005629b255bd"}
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.813259 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" event={"ID":"69cbaffe-e087-4b95-9943-d13f4455a667","Type":"ContainerStarted","Data":"ba720ee940cc5e6fe43e1271eb44ff8d5bf3d868b7008616bf7cd88e9705b9fd"}
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.813971 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.822109 4761 generic.go:334] "Generic (PLEG): container finished" podID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerID="7d5d534072a9499e74df376fb3dd630c17ef0858bff6f453cc0c171a3bcd99db" exitCode=0
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.822180 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8klgk" event={"ID":"d4b1310d-3887-4489-bbe0-5c63cd91603b","Type":"ContainerDied","Data":"7d5d534072a9499e74df376fb3dd630c17ef0858bff6f453cc0c171a3bcd99db"}
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.822212 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8klgk" event={"ID":"d4b1310d-3887-4489-bbe0-5c63cd91603b","Type":"ContainerStarted","Data":"afdb857524a0d1a4abf8957b56f8511a734c47a465cce76fa63859b050ae2b35"}
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.839249 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" event={"ID":"4b2f038d-4913-4c34-bf43-b97fe1d898d2","Type":"ContainerStarted","Data":"74d14a9cfbc59c6efde921008c58ca81e8ac9734dbcc8e3a075c776701b4d8e3"}
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.839308 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" event={"ID":"4b2f038d-4913-4c34-bf43-b97fe1d898d2","Type":"ContainerStarted","Data":"7ba32e063662cfd1fcdc98e13e3195822f8ddde6d39d1e1e6f8d2088db74550f"}
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.840980 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.843497 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" event={"ID":"473ecd8c-4e56-40ac-9444-2d43490c6424","Type":"ContainerStarted","Data":"afb1c01b59eedebf7cd675c015291a324f75150f230ac021a30df9dfdc7a88b6"}
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.845709 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.860962 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.861320 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.861431 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-catalog-content\") pod \"redhat-marketplace-2xc9s\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " pod="openshift-marketplace/redhat-marketplace-2xc9s"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.861499 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-utilities\") pod \"redhat-marketplace-2xc9s\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " pod="openshift-marketplace/redhat-marketplace-2xc9s"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.861647 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr6qf\" (UniqueName: \"kubernetes.io/projected/e614b274-38db-4951-8f55-a09c49011cb5-kube-api-access-mr6qf\") pod \"redhat-marketplace-2xc9s\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " pod="openshift-marketplace/redhat-marketplace-2xc9s"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.863018 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-catalog-content\") pod \"redhat-marketplace-2xc9s\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " pod="openshift-marketplace/redhat-marketplace-2xc9s"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.865054 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-utilities\") pod \"redhat-marketplace-2xc9s\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " pod="openshift-marketplace/redhat-marketplace-2xc9s"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.886473 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.907955 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" podStartSLOduration=3.907939312 podStartE2EDuration="3.907939312s" podCreationTimestamp="2026-03-07 07:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:54.906704561 +0000 UTC m=+171.815871036" watchObservedRunningTime="2026-03-07 07:51:54.907939312 +0000 UTC m=+171.817105787"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.914951 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr6qf\" (UniqueName: \"kubernetes.io/projected/e614b274-38db-4951-8f55-a09c49011cb5-kube-api-access-mr6qf\") pod \"redhat-marketplace-2xc9s\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " pod="openshift-marketplace/redhat-marketplace-2xc9s"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.921654 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 07 07:51:54 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld
Mar 07 07:51:54 crc kubenswrapper[4761]: [+]process-running ok
Mar 07 07:51:54 crc kubenswrapper[4761]: healthz check failed
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.921716 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.942228 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" podStartSLOduration=94.942207979 podStartE2EDuration="1m34.942207979s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:54.94146241 +0000 UTC m=+171.850628885" watchObservedRunningTime="2026-03-07 07:51:54.942207979 +0000 UTC m=+171.851374454"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.960349 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.966836 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.967010 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.967924 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.996491 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.016689 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" podStartSLOduration=3.016673528 podStartE2EDuration="3.016673528s" podCreationTimestamp="2026-03-07 07:51:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:55.01389138 +0000 UTC m=+171.923057875" watchObservedRunningTime="2026-03-07 07:51:55.016673528 +0000 UTC m=+171.925839993"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.024026 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.024645 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.035117 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.035231 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.050327 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.055943 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wvcd6"]
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.056990 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvcd6"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.079143 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvcd6"]
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.098730 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.172316 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfgsw\" (UniqueName: \"kubernetes.io/projected/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-kube-api-access-bfgsw\") pod \"redhat-marketplace-wvcd6\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " pod="openshift-marketplace/redhat-marketplace-wvcd6"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.172653 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.172679 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-utilities\") pod \"redhat-marketplace-wvcd6\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " pod="openshift-marketplace/redhat-marketplace-wvcd6"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.172742 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-catalog-content\") pod \"redhat-marketplace-wvcd6\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " pod="openshift-marketplace/redhat-marketplace-wvcd6"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.172765 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.274230 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-catalog-content\") pod \"redhat-marketplace-wvcd6\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " pod="openshift-marketplace/redhat-marketplace-wvcd6"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.274276 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.274356 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfgsw\" (UniqueName: \"kubernetes.io/projected/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-kube-api-access-bfgsw\") pod \"redhat-marketplace-wvcd6\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " pod="openshift-marketplace/redhat-marketplace-wvcd6"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.274380 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-utilities\") pod \"redhat-marketplace-wvcd6\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " pod="openshift-marketplace/redhat-marketplace-wvcd6"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.274396 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.274477 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.275123 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-catalog-content\") pod \"redhat-marketplace-wvcd6\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " pod="openshift-marketplace/redhat-marketplace-wvcd6"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.275971 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-utilities\") pod \"redhat-marketplace-wvcd6\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " pod="openshift-marketplace/redhat-marketplace-wvcd6"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.289919 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.293948 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfgsw\" (UniqueName: \"kubernetes.io/projected/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-kube-api-access-bfgsw\") pod \"redhat-marketplace-wvcd6\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " pod="openshift-marketplace/redhat-marketplace-wvcd6"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.360187 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.407682 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Mar 07 07:51:55 crc kubenswrapper[4761]: W0307 07:51:55.423034 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1c3ed8d3_899a_4b6d_a823_fdc635cde091.slice/crio-4407afa2709b44e644a0c3afc6e20cab28eb81bd6d9246a00698db4aedea0a1d WatchSource:0}: Error finding container 4407afa2709b44e644a0c3afc6e20cab28eb81bd6d9246a00698db4aedea0a1d: Status 404 returned error can't find the container with id 4407afa2709b44e644a0c3afc6e20cab28eb81bd6d9246a00698db4aedea0a1d
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.426537 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.426571 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.433828 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.506443 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.506778 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.508968 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.509013 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.574525 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.633091 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.637182 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvcd6"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.638829 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xc9s"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.819512 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fsrlc"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.820374 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-fsrlc"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.822501 4761 patch_prober.go:28] interesting pod/console-f9d7485db-fsrlc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.822760 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fsrlc" podUID="7b1e7bf9-5dc9-4326-b63d-426a716351bc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.852913 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zbq9k"]
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.854130 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zbq9k"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.861041 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.871699 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zbq9k"]
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.886413 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-8vzkp"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.904890 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 07 07:51:55 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld
Mar 07 07:51:55 crc kubenswrapper[4761]: [+]process-running ok
Mar 07 07:51:55 crc kubenswrapper[4761]: healthz check failed
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.904956 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.918999 4761 generic.go:334] "Generic (PLEG): container finished" podID="66a6be2c-da25-42c0-a8fa-075b8273bb65" containerID="b26ebf4b31ad9b755874a090e2400d415f6a366f21084cf982eba0cc6f886633" exitCode=0
Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.919188 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6"
event={"ID":"66a6be2c-da25-42c0-a8fa-075b8273bb65","Type":"ContainerDied","Data":"b26ebf4b31ad9b755874a090e2400d415f6a366f21084cf982eba0cc6f886633"} Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.921287 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.925661 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1c3ed8d3-899a-4b6d-a823-fdc635cde091","Type":"ContainerStarted","Data":"4407afa2709b44e644a0c3afc6e20cab28eb81bd6d9246a00698db4aedea0a1d"} Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.927509 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e6d96ab8-9ca2-4369-8f40-51360a0c0fef","Type":"ContainerStarted","Data":"ed53acf3b5423b2aa99a8ee1f328bc63150e3acc4fc1ceee50d3702b52c36d92"} Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.966460 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.985944 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-catalog-content\") pod \"redhat-operators-zbq9k\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.986011 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-utilities\") pod \"redhat-operators-zbq9k\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " 
pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.986045 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9psrq\" (UniqueName: \"kubernetes.io/projected/475b44c2-ce39-4d2c-b475-8a88c37a4d22-kube-api-access-9psrq\") pod \"redhat-operators-zbq9k\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:56 crc kubenswrapper[4761]: E0307 07:51:56.019951 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:51:56 crc kubenswrapper[4761]: E0307 07:51:56.048505 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:51:56 crc kubenswrapper[4761]: E0307 07:51:56.073397 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:51:56 crc kubenswrapper[4761]: E0307 07:51:56.073471 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" 
containerName="kube-multus-additional-cni-plugins" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.086915 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9psrq\" (UniqueName: \"kubernetes.io/projected/475b44c2-ce39-4d2c-b475-8a88c37a4d22-kube-api-access-9psrq\") pod \"redhat-operators-zbq9k\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.087345 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-catalog-content\") pod \"redhat-operators-zbq9k\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.087384 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-utilities\") pod \"redhat-operators-zbq9k\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.088989 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-catalog-content\") pod \"redhat-operators-zbq9k\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.089488 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-utilities\") pod \"redhat-operators-zbq9k\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:56 crc 
kubenswrapper[4761]: I0307 07:51:56.114849 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9psrq\" (UniqueName: \"kubernetes.io/projected/475b44c2-ce39-4d2c-b475-8a88c37a4d22-kube-api-access-9psrq\") pod \"redhat-operators-zbq9k\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.178943 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.255449 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wkvj9"] Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.258255 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.274692 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wkvj9"] Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.290524 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-utilities\") pod \"redhat-operators-wkvj9\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.290590 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-catalog-content\") pod \"redhat-operators-wkvj9\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.290899 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftpss\" (UniqueName: \"kubernetes.io/projected/d222854b-4039-4723-bdb4-2be9768cf9f7-kube-api-access-ftpss\") pod \"redhat-operators-wkvj9\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.368539 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xc9s"] Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.390320 4761 ???:1] "http: TLS handshake error from 192.168.126.11:40070: no serving certificate available for the kubelet" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.391802 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftpss\" (UniqueName: \"kubernetes.io/projected/d222854b-4039-4723-bdb4-2be9768cf9f7-kube-api-access-ftpss\") pod \"redhat-operators-wkvj9\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.391907 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-utilities\") pod \"redhat-operators-wkvj9\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.391971 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-catalog-content\") pod \"redhat-operators-wkvj9\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.392707 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-catalog-content\") pod \"redhat-operators-wkvj9\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.394069 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-utilities\") pod \"redhat-operators-wkvj9\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.424786 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftpss\" (UniqueName: \"kubernetes.io/projected/d222854b-4039-4723-bdb4-2be9768cf9f7-kube-api-access-ftpss\") pod \"redhat-operators-wkvj9\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.542369 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvcd6"] Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.585176 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.793458 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zbq9k"] Mar 07 07:51:56 crc kubenswrapper[4761]: W0307 07:51:56.827306 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod475b44c2_ce39_4d2c_b475_8a88c37a4d22.slice/crio-7cd404336db3278582f3f84c6dc0758504d6802e4eca15bb6c8c6727f6809d2e WatchSource:0}: Error finding container 7cd404336db3278582f3f84c6dc0758504d6802e4eca15bb6c8c6727f6809d2e: Status 404 returned error can't find the container with id 7cd404336db3278582f3f84c6dc0758504d6802e4eca15bb6c8c6727f6809d2e Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.868066 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wkvj9"] Mar 07 07:51:56 crc kubenswrapper[4761]: W0307 07:51:56.892311 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd222854b_4039_4723_bdb4_2be9768cf9f7.slice/crio-de23a09e452f2bfb79559f697b53473e1e4027e8b53b6fbf628450ba449d519f WatchSource:0}: Error finding container de23a09e452f2bfb79559f697b53473e1e4027e8b53b6fbf628450ba449d519f: Status 404 returned error can't find the container with id de23a09e452f2bfb79559f697b53473e1e4027e8b53b6fbf628450ba449d519f Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.900377 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:51:56 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:51:56 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:51:56 crc kubenswrapper[4761]: healthz 
check failed Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.900457 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.940168 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkvj9" event={"ID":"d222854b-4039-4723-bdb4-2be9768cf9f7","Type":"ContainerStarted","Data":"de23a09e452f2bfb79559f697b53473e1e4027e8b53b6fbf628450ba449d519f"} Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.945023 4761 generic.go:334] "Generic (PLEG): container finished" podID="e614b274-38db-4951-8f55-a09c49011cb5" containerID="419e20e6925afae9dc0ba45b444441d41aa2f7cac8e1cd54262a4617cf13bfac" exitCode=0 Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.945082 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xc9s" event={"ID":"e614b274-38db-4951-8f55-a09c49011cb5","Type":"ContainerDied","Data":"419e20e6925afae9dc0ba45b444441d41aa2f7cac8e1cd54262a4617cf13bfac"} Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.945108 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xc9s" event={"ID":"e614b274-38db-4951-8f55-a09c49011cb5","Type":"ContainerStarted","Data":"0f8a13c45f1b2417142f965fdcdde66f49582188f29393329d8613a807a1c1e7"} Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.961402 4761 generic.go:334] "Generic (PLEG): container finished" podID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerID="83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781" exitCode=0 Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.961480 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvcd6" 
event={"ID":"2cdb750e-2fd2-4e57-b474-f91f874a5e8d","Type":"ContainerDied","Data":"83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781"} Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.961508 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvcd6" event={"ID":"2cdb750e-2fd2-4e57-b474-f91f874a5e8d","Type":"ContainerStarted","Data":"01a0a0986372d1d9f62d984187377283eba6abf44594d70aa40803e57b311878"} Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.967385 4761 generic.go:334] "Generic (PLEG): container finished" podID="1c3ed8d3-899a-4b6d-a823-fdc635cde091" containerID="960a3de60df67b9f9ae1c5b9536fea6484ecd911ee406fb4e8f9f9cec6e467f8" exitCode=0 Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.967444 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1c3ed8d3-899a-4b6d-a823-fdc635cde091","Type":"ContainerDied","Data":"960a3de60df67b9f9ae1c5b9536fea6484ecd911ee406fb4e8f9f9cec6e467f8"} Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.984504 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbq9k" event={"ID":"475b44c2-ce39-4d2c-b475-8a88c37a4d22","Type":"ContainerStarted","Data":"7cd404336db3278582f3f84c6dc0758504d6802e4eca15bb6c8c6727f6809d2e"} Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.987018 4761 generic.go:334] "Generic (PLEG): container finished" podID="e6d96ab8-9ca2-4369-8f40-51360a0c0fef" containerID="fe1a778c6cb4566b421895471bd629b672276c4effd4f6c65986a3bea09e6b08" exitCode=0 Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.987280 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e6d96ab8-9ca2-4369-8f40-51360a0c0fef","Type":"ContainerDied","Data":"fe1a778c6cb4566b421895471bd629b672276c4effd4f6c65986a3bea09e6b08"} Mar 07 07:51:57 crc kubenswrapper[4761]: 
I0307 07:51:57.195293 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.305146 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66a6be2c-da25-42c0-a8fa-075b8273bb65-config-volume\") pod \"66a6be2c-da25-42c0-a8fa-075b8273bb65\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.305228 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66a6be2c-da25-42c0-a8fa-075b8273bb65-secret-volume\") pod \"66a6be2c-da25-42c0-a8fa-075b8273bb65\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.305281 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2t2b\" (UniqueName: \"kubernetes.io/projected/66a6be2c-da25-42c0-a8fa-075b8273bb65-kube-api-access-q2t2b\") pod \"66a6be2c-da25-42c0-a8fa-075b8273bb65\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.307115 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a6be2c-da25-42c0-a8fa-075b8273bb65-config-volume" (OuterVolumeSpecName: "config-volume") pod "66a6be2c-da25-42c0-a8fa-075b8273bb65" (UID: "66a6be2c-da25-42c0-a8fa-075b8273bb65"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.314498 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66a6be2c-da25-42c0-a8fa-075b8273bb65-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "66a6be2c-da25-42c0-a8fa-075b8273bb65" (UID: "66a6be2c-da25-42c0-a8fa-075b8273bb65"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.315985 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a6be2c-da25-42c0-a8fa-075b8273bb65-kube-api-access-q2t2b" (OuterVolumeSpecName: "kube-api-access-q2t2b") pod "66a6be2c-da25-42c0-a8fa-075b8273bb65" (UID: "66a6be2c-da25-42c0-a8fa-075b8273bb65"). InnerVolumeSpecName "kube-api-access-q2t2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.387027 4761 ???:1] "http: TLS handshake error from 192.168.126.11:40072: no serving certificate available for the kubelet" Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.407510 4761 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66a6be2c-da25-42c0-a8fa-075b8273bb65-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.407545 4761 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66a6be2c-da25-42c0-a8fa-075b8273bb65-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.407555 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2t2b\" (UniqueName: \"kubernetes.io/projected/66a6be2c-da25-42c0-a8fa-075b8273bb65-kube-api-access-q2t2b\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 
07:51:57.889688 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:51:57 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:51:57 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:51:57 crc kubenswrapper[4761]: healthz check failed Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.889750 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.025272 4761 generic.go:334] "Generic (PLEG): container finished" podID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerID="3150372c2d99fad85617856cce969ff24fbe8e06307e33cd0c0a4e391026e50b" exitCode=0 Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.025331 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkvj9" event={"ID":"d222854b-4039-4723-bdb4-2be9768cf9f7","Type":"ContainerDied","Data":"3150372c2d99fad85617856cce969ff24fbe8e06307e33cd0c0a4e391026e50b"} Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.070948 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" event={"ID":"66a6be2c-da25-42c0-a8fa-075b8273bb65","Type":"ContainerDied","Data":"321309a55803adc9e4242f9518e5893bb505901074141c78c3bd1a4360ba12ef"} Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.071020 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="321309a55803adc9e4242f9518e5893bb505901074141c78c3bd1a4360ba12ef" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.071209 4761 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.103951 4761 generic.go:334] "Generic (PLEG): container finished" podID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerID="36b36475a04f4bb5788ef4f132601b8eb14578495098f24a65c31ca99151024f" exitCode=0 Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.104699 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbq9k" event={"ID":"475b44c2-ce39-4d2c-b475-8a88c37a4d22","Type":"ContainerDied","Data":"36b36475a04f4bb5788ef4f132601b8eb14578495098f24a65c31ca99151024f"} Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.500025 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.563574 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kubelet-dir\") pod \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\" (UID: \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\") " Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.563667 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1c3ed8d3-899a-4b6d-a823-fdc635cde091" (UID: "1c3ed8d3-899a-4b6d-a823-fdc635cde091"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.564020 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kube-api-access\") pod \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\" (UID: \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\") " Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.564405 4761 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.572245 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1c3ed8d3-899a-4b6d-a823-fdc635cde091" (UID: "1c3ed8d3-899a-4b6d-a823-fdc635cde091"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.581069 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.665896 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kubelet-dir\") pod \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\" (UID: \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\") "
Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.665962 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kube-api-access\") pod \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\" (UID: \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\") "
Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.666064 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e6d96ab8-9ca2-4369-8f40-51360a0c0fef" (UID: "e6d96ab8-9ca2-4369-8f40-51360a0c0fef"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.667239 4761 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kubelet-dir\") on node \"crc\" DevicePath \"\""
Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.667254 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.670818 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e6d96ab8-9ca2-4369-8f40-51360a0c0fef" (UID: "e6d96ab8-9ca2-4369-8f40-51360a0c0fef"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.768535 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.891176 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 07 07:51:58 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld
Mar 07 07:51:58 crc kubenswrapper[4761]: [+]process-running ok
Mar 07 07:51:58 crc kubenswrapper[4761]: healthz check failed
Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.891226 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 07:51:59 crc kubenswrapper[4761]: I0307 07:51:59.125229 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e6d96ab8-9ca2-4369-8f40-51360a0c0fef","Type":"ContainerDied","Data":"ed53acf3b5423b2aa99a8ee1f328bc63150e3acc4fc1ceee50d3702b52c36d92"}
Mar 07 07:51:59 crc kubenswrapper[4761]: I0307 07:51:59.125268 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed53acf3b5423b2aa99a8ee1f328bc63150e3acc4fc1ceee50d3702b52c36d92"
Mar 07 07:51:59 crc kubenswrapper[4761]: I0307 07:51:59.125357 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 07 07:51:59 crc kubenswrapper[4761]: I0307 07:51:59.130231 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1c3ed8d3-899a-4b6d-a823-fdc635cde091","Type":"ContainerDied","Data":"4407afa2709b44e644a0c3afc6e20cab28eb81bd6d9246a00698db4aedea0a1d"}
Mar 07 07:51:59 crc kubenswrapper[4761]: I0307 07:51:59.130267 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4407afa2709b44e644a0c3afc6e20cab28eb81bd6d9246a00698db4aedea0a1d"
Mar 07 07:51:59 crc kubenswrapper[4761]: I0307 07:51:59.130323 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 07 07:51:59 crc kubenswrapper[4761]: I0307 07:51:59.890538 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 07 07:51:59 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld
Mar 07 07:51:59 crc kubenswrapper[4761]: [+]process-running ok
Mar 07 07:51:59 crc kubenswrapper[4761]: healthz check failed
Mar 07 07:51:59 crc kubenswrapper[4761]: I0307 07:51:59.890619 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.114945 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.125067 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547832-2fpg8"]
Mar 07 07:52:00 crc kubenswrapper[4761]: E0307 07:52:00.125265 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a6be2c-da25-42c0-a8fa-075b8273bb65" containerName="collect-profiles"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.125275 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a6be2c-da25-42c0-a8fa-075b8273bb65" containerName="collect-profiles"
Mar 07 07:52:00 crc kubenswrapper[4761]: E0307 07:52:00.125285 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d96ab8-9ca2-4369-8f40-51360a0c0fef" containerName="pruner"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.125291 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d96ab8-9ca2-4369-8f40-51360a0c0fef" containerName="pruner"
Mar 07 07:52:00 crc kubenswrapper[4761]: E0307 07:52:00.125302 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3ed8d3-899a-4b6d-a823-fdc635cde091" containerName="pruner"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.125308 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3ed8d3-899a-4b6d-a823-fdc635cde091" containerName="pruner"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.125469 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a6be2c-da25-42c0-a8fa-075b8273bb65" containerName="collect-profiles"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.125482 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d96ab8-9ca2-4369-8f40-51360a0c0fef" containerName="pruner"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.125493 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3ed8d3-899a-4b6d-a823-fdc635cde091" containerName="pruner"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.125836 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547832-2fpg8"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.126090 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.130330 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.130434 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.131538 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.134671 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547832-2fpg8"]
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.216284 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slg9r\" (UniqueName: \"kubernetes.io/projected/083b3718-3e45-40ca-8adf-5f417eeda74d-kube-api-access-slg9r\") pod \"auto-csr-approver-29547832-2fpg8\" (UID: \"083b3718-3e45-40ca-8adf-5f417eeda74d\") " pod="openshift-infra/auto-csr-approver-29547832-2fpg8"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.230521 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.320399 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slg9r\" (UniqueName: \"kubernetes.io/projected/083b3718-3e45-40ca-8adf-5f417eeda74d-kube-api-access-slg9r\") pod \"auto-csr-approver-29547832-2fpg8\" (UID: \"083b3718-3e45-40ca-8adf-5f417eeda74d\") " pod="openshift-infra/auto-csr-approver-29547832-2fpg8"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.360293 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slg9r\" (UniqueName: \"kubernetes.io/projected/083b3718-3e45-40ca-8adf-5f417eeda74d-kube-api-access-slg9r\") pod \"auto-csr-approver-29547832-2fpg8\" (UID: \"083b3718-3e45-40ca-8adf-5f417eeda74d\") " pod="openshift-infra/auto-csr-approver-29547832-2fpg8"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.474451 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547832-2fpg8"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.609269 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9pvvx"]
Mar 07 07:52:00 crc kubenswrapper[4761]: W0307 07:52:00.613730 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd879fe59_4c7f_4af7_8c06_f3462f8e07d9.slice/crio-e80b328ed5fafdf0145fe260c49202293479250434f37422e2e6785e9fedf38a WatchSource:0}: Error finding container e80b328ed5fafdf0145fe260c49202293479250434f37422e2e6785e9fedf38a: Status 404 returned error can't find the container with id e80b328ed5fafdf0145fe260c49202293479250434f37422e2e6785e9fedf38a
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.890591 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 07 07:52:00 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld
Mar 07 07:52:00 crc kubenswrapper[4761]: [+]process-running ok
Mar 07 07:52:00 crc kubenswrapper[4761]: healthz check failed
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.890916 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.985034 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547832-2fpg8"]
Mar 07 07:52:01 crc kubenswrapper[4761]: I0307 07:52:01.023780 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-c9px5"
Mar 07 07:52:01 crc kubenswrapper[4761]: I0307 07:52:01.151742 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547832-2fpg8" event={"ID":"083b3718-3e45-40ca-8adf-5f417eeda74d","Type":"ContainerStarted","Data":"2ee6a6d892b38761648f5532063c1b38a1d3cfa3b95bf0d700f8031b32a71582"}
Mar 07 07:52:01 crc kubenswrapper[4761]: I0307 07:52:01.153371 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" event={"ID":"d879fe59-4c7f-4af7-8c06-f3462f8e07d9","Type":"ContainerStarted","Data":"e80b328ed5fafdf0145fe260c49202293479250434f37422e2e6785e9fedf38a"}
Mar 07 07:52:01 crc kubenswrapper[4761]: I0307 07:52:01.534946 4761 ???:1] "http: TLS handshake error from 192.168.126.11:40082: no serving certificate available for the kubelet"
Mar 07 07:52:01 crc kubenswrapper[4761]: I0307 07:52:01.890481 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 07 07:52:01 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld
Mar 07 07:52:01 crc kubenswrapper[4761]: [+]process-running ok
Mar 07 07:52:01 crc kubenswrapper[4761]: healthz check failed
Mar 07 07:52:01 crc kubenswrapper[4761]: I0307 07:52:01.890556 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 07:52:02 crc kubenswrapper[4761]: I0307 07:52:02.169628 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" event={"ID":"d879fe59-4c7f-4af7-8c06-f3462f8e07d9","Type":"ContainerStarted","Data":"33fab6e08dd590dec2a6c6a31d61f09741e09226f333777b78f6b62c8014800b"}
Mar 07 07:52:02 crc kubenswrapper[4761]: I0307 07:52:02.888826 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 07 07:52:02 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld
Mar 07 07:52:02 crc kubenswrapper[4761]: [+]process-running ok
Mar 07 07:52:02 crc kubenswrapper[4761]: healthz check failed
Mar 07 07:52:02 crc kubenswrapper[4761]: I0307 07:52:02.888879 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 07:52:03 crc kubenswrapper[4761]: I0307 07:52:03.888679 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 07 07:52:03 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld
Mar 07 07:52:03 crc kubenswrapper[4761]: [+]process-running ok
Mar 07 07:52:03 crc kubenswrapper[4761]: healthz check failed
Mar 07 07:52:03 crc kubenswrapper[4761]: I0307 07:52:03.888935 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 07:52:04 crc kubenswrapper[4761]: I0307 07:52:04.888705 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 07 07:52:04 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld
Mar 07 07:52:04 crc kubenswrapper[4761]: [+]process-running ok
Mar 07 07:52:04 crc kubenswrapper[4761]: healthz check failed
Mar 07 07:52:04 crc kubenswrapper[4761]: I0307 07:52:04.889087 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 07:52:05 crc kubenswrapper[4761]: I0307 07:52:05.506913 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Mar 07 07:52:05 crc kubenswrapper[4761]: I0307 07:52:05.507213 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Mar 07 07:52:05 crc kubenswrapper[4761]: I0307 07:52:05.508549 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Mar 07 07:52:05 crc kubenswrapper[4761]: I0307 07:52:05.508616 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Mar 07 07:52:05 crc kubenswrapper[4761]: I0307 07:52:05.816502 4761 patch_prober.go:28] interesting pod/console-f9d7485db-fsrlc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Mar 07 07:52:05 crc kubenswrapper[4761]: I0307 07:52:05.816555 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fsrlc" podUID="7b1e7bf9-5dc9-4326-b63d-426a716351bc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused"
Mar 07 07:52:05 crc kubenswrapper[4761]: I0307 07:52:05.888821 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 07 07:52:05 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld
Mar 07 07:52:05 crc kubenswrapper[4761]: [+]process-running ok
Mar 07 07:52:05 crc kubenswrapper[4761]: healthz check failed
Mar 07 07:52:05 crc kubenswrapper[4761]: I0307 07:52:05.888892 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 07:52:06 crc kubenswrapper[4761]: E0307 07:52:06.010106 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 07 07:52:06 crc kubenswrapper[4761]: E0307 07:52:06.011764 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 07 07:52:06 crc kubenswrapper[4761]: E0307 07:52:06.015215 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 07 07:52:06 crc kubenswrapper[4761]: E0307 07:52:06.015272 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerName="kube-multus-additional-cni-plugins"
Mar 07 07:52:06 crc kubenswrapper[4761]: I0307 07:52:06.889147 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-8vzkp"
Mar 07 07:52:06 crc kubenswrapper[4761]: I0307 07:52:06.892146 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-8vzkp"
Mar 07 07:52:08 crc kubenswrapper[4761]: I0307 07:52:08.202846 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" event={"ID":"d879fe59-4c7f-4af7-8c06-f3462f8e07d9","Type":"ContainerStarted","Data":"9b55ba3fa21ce454165d79494b473ae5752c35045698d3ff997aa4d1b25686ed"}
Mar 07 07:52:09 crc kubenswrapper[4761]: I0307 07:52:09.227263 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-9pvvx" podStartSLOduration=109.227248709 podStartE2EDuration="1m49.227248709s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:52:09.226941422 +0000 UTC m=+186.136107907" watchObservedRunningTime="2026-03-07 07:52:09.227248709 +0000 UTC m=+186.136415184"
Mar 07 07:52:11 crc kubenswrapper[4761]: I0307 07:52:11.185827 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7757844dd9-dwvqd"]
Mar 07 07:52:11 crc kubenswrapper[4761]: I0307 07:52:11.186369 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" podUID="4b2f038d-4913-4c34-bf43-b97fe1d898d2" containerName="controller-manager" containerID="cri-o://74d14a9cfbc59c6efde921008c58ca81e8ac9734dbcc8e3a075c776701b4d8e3" gracePeriod=30
Mar 07 07:52:11 crc kubenswrapper[4761]: I0307 07:52:11.244955 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2"]
Mar 07 07:52:11 crc kubenswrapper[4761]: I0307 07:52:11.245143 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" podUID="69cbaffe-e087-4b95-9943-d13f4455a667" containerName="route-controller-manager" containerID="cri-o://f36b7833aa5e7f0d50c890e408bf7d4a0662ba6342003c87a320005629b255bd" gracePeriod=30
Mar 07 07:52:13 crc kubenswrapper[4761]: I0307 07:52:13.043200 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db"
Mar 07 07:52:13 crc kubenswrapper[4761]: I0307 07:52:13.229362 4761 generic.go:334] "Generic (PLEG): container finished" podID="69cbaffe-e087-4b95-9943-d13f4455a667" containerID="f36b7833aa5e7f0d50c890e408bf7d4a0662ba6342003c87a320005629b255bd" exitCode=0
Mar 07 07:52:13 crc kubenswrapper[4761]: I0307 07:52:13.229440 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" event={"ID":"69cbaffe-e087-4b95-9943-d13f4455a667","Type":"ContainerDied","Data":"f36b7833aa5e7f0d50c890e408bf7d4a0662ba6342003c87a320005629b255bd"}
Mar 07 07:52:13 crc kubenswrapper[4761]: I0307 07:52:13.231320 4761 generic.go:334] "Generic (PLEG): container finished" podID="4b2f038d-4913-4c34-bf43-b97fe1d898d2" containerID="74d14a9cfbc59c6efde921008c58ca81e8ac9734dbcc8e3a075c776701b4d8e3" exitCode=0
Mar 07 07:52:13 crc kubenswrapper[4761]: I0307 07:52:13.231352 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" event={"ID":"4b2f038d-4913-4c34-bf43-b97fe1d898d2","Type":"ContainerDied","Data":"74d14a9cfbc59c6efde921008c58ca81e8ac9734dbcc8e3a075c776701b4d8e3"}
Mar 07 07:52:13 crc kubenswrapper[4761]: I0307 07:52:13.722674 4761 patch_prober.go:28] interesting pod/route-controller-manager-55f558d7cd-2lwj2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.48:8443/healthz\": dial tcp 10.217.0.48:8443: connect: connection refused" start-of-body=
Mar 07 07:52:13 crc kubenswrapper[4761]: I0307 07:52:13.722776 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" podUID="69cbaffe-e087-4b95-9943-d13f4455a667" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.48:8443/healthz\": dial tcp 10.217.0.48:8443: connect: connection refused"
Mar 07 07:52:13 crc kubenswrapper[4761]: I0307 07:52:13.735194 4761 patch_prober.go:28] interesting pod/controller-manager-7757844dd9-dwvqd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused" start-of-body=
Mar 07 07:52:13 crc kubenswrapper[4761]: I0307 07:52:13.735283 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" podUID="4b2f038d-4913-4c34-bf43-b97fe1d898d2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused"
Mar 07 07:52:15 crc kubenswrapper[4761]: I0307 07:52:15.514783 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2lhb8"
Mar 07 07:52:15 crc kubenswrapper[4761]: I0307 07:52:15.957604 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fsrlc"
Mar 07 07:52:15 crc kubenswrapper[4761]: I0307 07:52:15.960699 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fsrlc"
Mar 07 07:52:16 crc kubenswrapper[4761]: E0307 07:52:16.015923 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 07 07:52:16 crc kubenswrapper[4761]: E0307 07:52:16.020108 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 07 07:52:16 crc kubenswrapper[4761]: E0307 07:52:16.021881 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 07 07:52:16 crc kubenswrapper[4761]: E0307 07:52:16.021938 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerName="kube-multus-additional-cni-plugins"
Mar 07 07:52:22 crc kubenswrapper[4761]: I0307 07:52:22.038080 4761 ???:1] "http: TLS handshake error from 192.168.126.11:52168: no serving certificate available for the kubelet"
Mar 07 07:52:23 crc kubenswrapper[4761]: I0307 07:52:23.722411 4761 patch_prober.go:28] interesting pod/route-controller-manager-55f558d7cd-2lwj2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.48:8443/healthz\": dial tcp 10.217.0.48:8443: connect: connection refused" start-of-body=
Mar 07 07:52:23 crc kubenswrapper[4761]: I0307 07:52:23.722505 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" podUID="69cbaffe-e087-4b95-9943-d13f4455a667" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.48:8443/healthz\": dial tcp 10.217.0.48:8443: connect: connection refused"
Mar 07 07:52:23 crc kubenswrapper[4761]: I0307 07:52:23.735574 4761 patch_prober.go:28] interesting pod/controller-manager-7757844dd9-dwvqd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused" start-of-body=
Mar 07 07:52:23 crc kubenswrapper[4761]: I0307 07:52:23.735771 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" podUID="4b2f038d-4913-4c34-bf43-b97fe1d898d2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused"
Mar 07 07:52:25 crc kubenswrapper[4761]: I0307 07:52:25.969596 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx"
Mar 07 07:52:26 crc kubenswrapper[4761]: E0307 07:52:26.010644 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 07 07:52:26 crc kubenswrapper[4761]: E0307 07:52:26.013980 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 07 07:52:26 crc kubenswrapper[4761]: E0307 07:52:26.020395 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 07 07:52:26 crc kubenswrapper[4761]: E0307 07:52:26.020473 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerName="kube-multus-additional-cni-plugins"
Mar 07 07:52:26 crc kubenswrapper[4761]: I0307 07:52:26.933754 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Mar 07 07:52:26 crc kubenswrapper[4761]: I0307 07:52:26.934597 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 07 07:52:26 crc kubenswrapper[4761]: I0307 07:52:26.937147 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 07 07:52:26 crc kubenswrapper[4761]: I0307 07:52:26.937466 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Mar 07 07:52:26 crc kubenswrapper[4761]: I0307 07:52:26.945900 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Mar 07 07:52:27 crc kubenswrapper[4761]: I0307 07:52:27.072615 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/732fe657-405c-446a-bd53-a7ac3671531c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"732fe657-405c-446a-bd53-a7ac3671531c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 07 07:52:27 crc kubenswrapper[4761]: I0307 07:52:27.072678 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/732fe657-405c-446a-bd53-a7ac3671531c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"732fe657-405c-446a-bd53-a7ac3671531c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 07 07:52:27 crc kubenswrapper[4761]: I0307 07:52:27.173738 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/732fe657-405c-446a-bd53-a7ac3671531c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"732fe657-405c-446a-bd53-a7ac3671531c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 07 07:52:27 crc kubenswrapper[4761]: I0307 07:52:27.173864 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/732fe657-405c-446a-bd53-a7ac3671531c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"732fe657-405c-446a-bd53-a7ac3671531c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 07 07:52:27 crc kubenswrapper[4761]: I0307 07:52:27.173844 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/732fe657-405c-446a-bd53-a7ac3671531c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"732fe657-405c-446a-bd53-a7ac3671531c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 07 07:52:27 crc kubenswrapper[4761]: I0307 07:52:27.201606 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/732fe657-405c-446a-bd53-a7ac3671531c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"732fe657-405c-446a-bd53-a7ac3671531c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 07 07:52:27 crc kubenswrapper[4761]: I0307 07:52:27.269764 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 07 07:52:29 crc kubenswrapper[4761]: E0307 07:52:29.880885 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Mar 07 07:52:29 crc kubenswrapper[4761]: E0307 07:52:29.881048 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr6qf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Co
ntainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2xc9s_openshift-marketplace(e614b274-38db-4951-8f55-a09c49011cb5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 07:52:29 crc kubenswrapper[4761]: E0307 07:52:29.882363 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2xc9s" podUID="e614b274-38db-4951-8f55-a09c49011cb5" Mar 07 07:52:29 crc kubenswrapper[4761]: E0307 07:52:29.970352 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 07 07:52:29 crc kubenswrapper[4761]: E0307 07:52:29.970504 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjlrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ztv97_openshift-marketplace(af0bdacc-ab60-43aa-adf2-86894b0896e3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 07:52:29 crc kubenswrapper[4761]: E0307 07:52:29.971684 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ztv97" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" Mar 07 07:52:30 crc 
kubenswrapper[4761]: E0307 07:52:30.488753 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2xc9s" podUID="e614b274-38db-4951-8f55-a09c49011cb5" Mar 07 07:52:30 crc kubenswrapper[4761]: E0307 07:52:30.488969 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ztv97" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" Mar 07 07:52:31 crc kubenswrapper[4761]: E0307 07:52:31.624066 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 07 07:52:31 crc kubenswrapper[4761]: E0307 07:52:31.624296 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wg7qd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8klgk_openshift-marketplace(d4b1310d-3887-4489-bbe0-5c63cd91603b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 07:52:31 crc kubenswrapper[4761]: E0307 07:52:31.625552 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8klgk" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" Mar 07 07:52:32 crc 
kubenswrapper[4761]: I0307 07:52:32.348559 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-cm8bz_f7a57ac7-fb31-4740-a91c-79947bbdb195/kube-multus-additional-cni-plugins/0.log" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.348605 4761 generic.go:334] "Generic (PLEG): container finished" podID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" exitCode=137 Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.348758 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" event={"ID":"f7a57ac7-fb31-4740-a91c-79947bbdb195","Type":"ContainerDied","Data":"c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d"} Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.736309 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.739790 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.746631 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.863966 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-var-lock\") pod \"installer-9-crc\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.864035 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.864353 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88f12d9b-cb82-4690-be2c-35d91899a86a-kube-api-access\") pod \"installer-9-crc\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.965235 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-var-lock\") pod \"installer-9-crc\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.965313 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.965436 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-var-lock\") pod \"installer-9-crc\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.965538 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88f12d9b-cb82-4690-be2c-35d91899a86a-kube-api-access\") pod \"installer-9-crc\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.965583 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:33 crc kubenswrapper[4761]: I0307 07:52:33.000780 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88f12d9b-cb82-4690-be2c-35d91899a86a-kube-api-access\") pod \"installer-9-crc\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:33 crc kubenswrapper[4761]: I0307 07:52:33.072517 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:34 crc kubenswrapper[4761]: I0307 07:52:34.722849 4761 patch_prober.go:28] interesting pod/route-controller-manager-55f558d7cd-2lwj2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.48:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 07:52:34 crc kubenswrapper[4761]: I0307 07:52:34.723131 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" podUID="69cbaffe-e087-4b95-9943-d13f4455a667" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.48:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 07:52:34 crc kubenswrapper[4761]: I0307 07:52:34.749928 4761 patch_prober.go:28] interesting pod/controller-manager-7757844dd9-dwvqd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 07:52:34 crc kubenswrapper[4761]: I0307 07:52:34.749961 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" podUID="4b2f038d-4913-4c34-bf43-b97fe1d898d2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 07:52:36 crc kubenswrapper[4761]: E0307 07:52:36.006640 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d is running failed: container process not found" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:52:36 crc kubenswrapper[4761]: E0307 07:52:36.007888 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d is running failed: container process not found" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:52:36 crc kubenswrapper[4761]: E0307 07:52:36.008507 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d is running failed: container process not found" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:52:36 crc kubenswrapper[4761]: E0307 07:52:36.008539 4761 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerName="kube-multus-additional-cni-plugins" Mar 07 07:52:37 crc kubenswrapper[4761]: E0307 07:52:37.198472 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-8klgk" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.273790 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.322814 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86d9ccbc48-c2429"] Mar 07 07:52:37 crc kubenswrapper[4761]: E0307 07:52:37.323116 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b2f038d-4913-4c34-bf43-b97fe1d898d2" containerName="controller-manager" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.323128 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b2f038d-4913-4c34-bf43-b97fe1d898d2" containerName="controller-manager" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.323224 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b2f038d-4913-4c34-bf43-b97fe1d898d2" containerName="controller-manager" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.323643 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.326181 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86d9ccbc48-c2429"] Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.342564 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/852ec1df-3658-47ff-9e91-98c74a6e956a-serving-cert\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.342627 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmbqk\" (UniqueName: \"kubernetes.io/projected/852ec1df-3658-47ff-9e91-98c74a6e956a-kube-api-access-dmbqk\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.342761 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-client-ca\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.342840 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-config\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " 
pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.342932 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-proxy-ca-bundles\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.376880 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" event={"ID":"4b2f038d-4913-4c34-bf43-b97fe1d898d2","Type":"ContainerDied","Data":"7ba32e063662cfd1fcdc98e13e3195822f8ddde6d39d1e1e6f8d2088db74550f"} Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.376930 4761 scope.go:117] "RemoveContainer" containerID="74d14a9cfbc59c6efde921008c58ca81e8ac9734dbcc8e3a075c776701b4d8e3" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.376967 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.443949 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-proxy-ca-bundles\") pod \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.444308 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-config\") pod \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.444346 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7p22\" (UniqueName: \"kubernetes.io/projected/4b2f038d-4913-4c34-bf43-b97fe1d898d2-kube-api-access-q7p22\") pod \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.444412 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b2f038d-4913-4c34-bf43-b97fe1d898d2-serving-cert\") pod \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.444435 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-client-ca\") pod \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.444663 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-config\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.444791 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4b2f038d-4913-4c34-bf43-b97fe1d898d2" (UID: "4b2f038d-4913-4c34-bf43-b97fe1d898d2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.445027 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-client-ca" (OuterVolumeSpecName: "client-ca") pod "4b2f038d-4913-4c34-bf43-b97fe1d898d2" (UID: "4b2f038d-4913-4c34-bf43-b97fe1d898d2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.445068 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-config" (OuterVolumeSpecName: "config") pod "4b2f038d-4913-4c34-bf43-b97fe1d898d2" (UID: "4b2f038d-4913-4c34-bf43-b97fe1d898d2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.445972 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-proxy-ca-bundles\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.445993 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-config\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.446027 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/852ec1df-3658-47ff-9e91-98c74a6e956a-serving-cert\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.446106 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmbqk\" (UniqueName: \"kubernetes.io/projected/852ec1df-3658-47ff-9e91-98c74a6e956a-kube-api-access-dmbqk\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.446195 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-client-ca\") pod 
\"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.446246 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.446260 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.446270 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.447055 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-client-ca\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.447070 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-proxy-ca-bundles\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.449914 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b2f038d-4913-4c34-bf43-b97fe1d898d2-serving-cert" (OuterVolumeSpecName: 
"serving-cert") pod "4b2f038d-4913-4c34-bf43-b97fe1d898d2" (UID: "4b2f038d-4913-4c34-bf43-b97fe1d898d2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.450038 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b2f038d-4913-4c34-bf43-b97fe1d898d2-kube-api-access-q7p22" (OuterVolumeSpecName: "kube-api-access-q7p22") pod "4b2f038d-4913-4c34-bf43-b97fe1d898d2" (UID: "4b2f038d-4913-4c34-bf43-b97fe1d898d2"). InnerVolumeSpecName "kube-api-access-q7p22". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.454268 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/852ec1df-3658-47ff-9e91-98c74a6e956a-serving-cert\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.460596 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmbqk\" (UniqueName: \"kubernetes.io/projected/852ec1df-3658-47ff-9e91-98c74a6e956a-kube-api-access-dmbqk\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.547727 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b2f038d-4913-4c34-bf43-b97fe1d898d2-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.547763 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7p22\" (UniqueName: 
\"kubernetes.io/projected/4b2f038d-4913-4c34-bf43-b97fe1d898d2-kube-api-access-q7p22\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.659853 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.704531 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7757844dd9-dwvqd"] Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.712576 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7757844dd9-dwvqd"] Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.154950 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.154996 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.156921 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.156927 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 07 07:52:38 crc 
kubenswrapper[4761]: I0307 07:52:38.166204 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.169102 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.256186 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.256325 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.257482 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.268390 4761 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.280954 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.281205 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.432152 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.451104 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.462811 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:52:39 crc kubenswrapper[4761]: I0307 07:52:39.715852 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b2f038d-4913-4c34-bf43-b97fe1d898d2" path="/var/lib/kubelet/pods/4b2f038d-4913-4c34-bf43-b97fe1d898d2/volumes" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.084280 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.084418 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftpss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false
,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wkvj9_openshift-marketplace(d222854b-4039-4723-bdb4-2be9768cf9f7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.087007 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wkvj9" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.087746 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.116048 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-config\") pod \"69cbaffe-e087-4b95-9943-d13f4455a667\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.116110 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-client-ca\") pod \"69cbaffe-e087-4b95-9943-d13f4455a667\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.116153 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69cbaffe-e087-4b95-9943-d13f4455a667-serving-cert\") pod \"69cbaffe-e087-4b95-9943-d13f4455a667\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.116188 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9467\" (UniqueName: \"kubernetes.io/projected/69cbaffe-e087-4b95-9943-d13f4455a667-kube-api-access-x9467\") pod \"69cbaffe-e087-4b95-9943-d13f4455a667\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.117911 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-client-ca" (OuterVolumeSpecName: "client-ca") pod "69cbaffe-e087-4b95-9943-d13f4455a667" (UID: "69cbaffe-e087-4b95-9943-d13f4455a667"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.118525 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-config" (OuterVolumeSpecName: "config") pod "69cbaffe-e087-4b95-9943-d13f4455a667" (UID: "69cbaffe-e087-4b95-9943-d13f4455a667"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.119009 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866789466c-f86q8"] Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.119412 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69cbaffe-e087-4b95-9943-d13f4455a667" containerName="route-controller-manager" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.119430 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="69cbaffe-e087-4b95-9943-d13f4455a667" containerName="route-controller-manager" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.119530 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="69cbaffe-e087-4b95-9943-d13f4455a667" containerName="route-controller-manager" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.119922 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.120594 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.120813 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9psrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupPro
be:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zbq9k_openshift-marketplace(475b44c2-ce39-4d2c-b475-8a88c37a4d22): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.122778 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zbq9k" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.126871 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.127015 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfgsw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wvcd6_openshift-marketplace(2cdb750e-2fd2-4e57-b474-f91f874a5e8d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.127333 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69cbaffe-e087-4b95-9943-d13f4455a667-kube-api-access-x9467" (OuterVolumeSpecName: "kube-api-access-x9467") pod "69cbaffe-e087-4b95-9943-d13f4455a667" (UID: "69cbaffe-e087-4b95-9943-d13f4455a667"). InnerVolumeSpecName "kube-api-access-x9467". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.128292 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wvcd6" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.128464 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69cbaffe-e087-4b95-9943-d13f4455a667-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "69cbaffe-e087-4b95-9943-d13f4455a667" (UID: "69cbaffe-e087-4b95-9943-d13f4455a667"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.131202 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866789466c-f86q8"] Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.217248 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.217278 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.217289 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69cbaffe-e087-4b95-9943-d13f4455a667-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.217298 4761 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-x9467\" (UniqueName: \"kubernetes.io/projected/69cbaffe-e087-4b95-9943-d13f4455a667-kube-api-access-x9467\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.318324 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-config\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.318808 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd2zx\" (UniqueName: \"kubernetes.io/projected/d5b2d79f-d9de-4fd1-b966-db43755f248e-kube-api-access-vd2zx\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.318910 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-client-ca\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.319058 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5b2d79f-d9de-4fd1-b966-db43755f248e-serving-cert\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc 
kubenswrapper[4761]: I0307 07:52:41.409506 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" event={"ID":"69cbaffe-e087-4b95-9943-d13f4455a667","Type":"ContainerDied","Data":"ba720ee940cc5e6fe43e1271eb44ff8d5bf3d868b7008616bf7cd88e9705b9fd"} Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.409559 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.412481 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-cm8bz_f7a57ac7-fb31-4740-a91c-79947bbdb195/kube-multus-additional-cni-plugins/0.log" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.412589 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" event={"ID":"f7a57ac7-fb31-4740-a91c-79947bbdb195","Type":"ContainerDied","Data":"10fffd5195b9e393f3834032440a56b1e21df8b250bef07886ea2a129d17fa61"} Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.412632 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10fffd5195b9e393f3834032440a56b1e21df8b250bef07886ea2a129d17fa61" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.415263 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wkvj9" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.415304 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wvcd6" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.415471 4761 scope.go:117] "RemoveContainer" containerID="f36b7833aa5e7f0d50c890e408bf7d4a0662ba6342003c87a320005629b255bd" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.420589 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-config\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.420652 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd2zx\" (UniqueName: \"kubernetes.io/projected/d5b2d79f-d9de-4fd1-b966-db43755f248e-kube-api-access-vd2zx\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.420690 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-client-ca\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.420848 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5b2d79f-d9de-4fd1-b966-db43755f248e-serving-cert\") pod \"route-controller-manager-866789466c-f86q8\" (UID: 
\"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.421763 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-client-ca\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.422039 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-config\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.427699 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5b2d79f-d9de-4fd1-b966-db43755f248e-serving-cert\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.431886 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zbq9k" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.437291 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd2zx\" (UniqueName: 
\"kubernetes.io/projected/d5b2d79f-d9de-4fd1-b966-db43755f248e-kube-api-access-vd2zx\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.465659 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-cm8bz_f7a57ac7-fb31-4740-a91c-79947bbdb195/kube-multus-additional-cni-plugins/0.log" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.466043 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.466324 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.477615 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2"] Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.481172 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2"] Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.623285 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwcj7\" (UniqueName: \"kubernetes.io/projected/f7a57ac7-fb31-4740-a91c-79947bbdb195-kube-api-access-nwcj7\") pod \"f7a57ac7-fb31-4740-a91c-79947bbdb195\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.623348 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f7a57ac7-fb31-4740-a91c-79947bbdb195-tuning-conf-dir\") pod 
\"f7a57ac7-fb31-4740-a91c-79947bbdb195\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.623395 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f7a57ac7-fb31-4740-a91c-79947bbdb195-ready\") pod \"f7a57ac7-fb31-4740-a91c-79947bbdb195\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.623436 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f7a57ac7-fb31-4740-a91c-79947bbdb195-cni-sysctl-allowlist\") pod \"f7a57ac7-fb31-4740-a91c-79947bbdb195\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.624561 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a57ac7-fb31-4740-a91c-79947bbdb195-ready" (OuterVolumeSpecName: "ready") pod "f7a57ac7-fb31-4740-a91c-79947bbdb195" (UID: "f7a57ac7-fb31-4740-a91c-79947bbdb195"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.624282 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a57ac7-fb31-4740-a91c-79947bbdb195-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "f7a57ac7-fb31-4740-a91c-79947bbdb195" (UID: "f7a57ac7-fb31-4740-a91c-79947bbdb195"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.625242 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7a57ac7-fb31-4740-a91c-79947bbdb195-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "f7a57ac7-fb31-4740-a91c-79947bbdb195" (UID: "f7a57ac7-fb31-4740-a91c-79947bbdb195"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.627319 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a57ac7-fb31-4740-a91c-79947bbdb195-kube-api-access-nwcj7" (OuterVolumeSpecName: "kube-api-access-nwcj7") pod "f7a57ac7-fb31-4740-a91c-79947bbdb195" (UID: "f7a57ac7-fb31-4740-a91c-79947bbdb195"). InnerVolumeSpecName "kube-api-access-nwcj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.711641 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69cbaffe-e087-4b95-9943-d13f4455a667" path="/var/lib/kubelet/pods/69cbaffe-e087-4b95-9943-d13f4455a667/volumes" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.724406 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwcj7\" (UniqueName: \"kubernetes.io/projected/f7a57ac7-fb31-4740-a91c-79947bbdb195-kube-api-access-nwcj7\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.724439 4761 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f7a57ac7-fb31-4740-a91c-79947bbdb195-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.724450 4761 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f7a57ac7-fb31-4740-a91c-79947bbdb195-ready\") 
on node \"crc\" DevicePath \"\"" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.724461 4761 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f7a57ac7-fb31-4740-a91c-79947bbdb195-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:42 crc kubenswrapper[4761]: E0307 07:52:42.234791 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 07 07:52:42 crc kubenswrapper[4761]: E0307 07:52:42.235183 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:52:42 crc kubenswrapper[4761]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 07 07:52:42 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-slg9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29547832-2fpg8_openshift-infra(083b3718-3e45-40ca-8adf-5f417eeda74d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled 
Mar 07 07:52:42 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:52:42 crc kubenswrapper[4761]: E0307 07:52:42.236791 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29547832-2fpg8" podUID="083b3718-3e45-40ca-8adf-5f417eeda74d" Mar 07 07:52:42 crc kubenswrapper[4761]: I0307 07:52:42.434589 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:52:42 crc kubenswrapper[4761]: E0307 07:52:42.441908 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29547832-2fpg8" podUID="083b3718-3e45-40ca-8adf-5f417eeda74d" Mar 07 07:52:42 crc kubenswrapper[4761]: I0307 07:52:42.458322 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-cm8bz"] Mar 07 07:52:42 crc kubenswrapper[4761]: I0307 07:52:42.460954 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-cm8bz"] Mar 07 07:52:42 crc kubenswrapper[4761]: I0307 07:52:42.661561 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86d9ccbc48-c2429"] Mar 07 07:52:42 crc kubenswrapper[4761]: I0307 07:52:42.669071 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 07 07:52:42 crc kubenswrapper[4761]: W0307 07:52:42.671086 4761 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod852ec1df_3658_47ff_9e91_98c74a6e956a.slice/crio-ec9118a124662d15a6dd684ddc4478555ab7ac5701a1e5221e4368887332b26d WatchSource:0}: Error finding container ec9118a124662d15a6dd684ddc4478555ab7ac5701a1e5221e4368887332b26d: Status 404 returned error can't find the container with id ec9118a124662d15a6dd684ddc4478555ab7ac5701a1e5221e4368887332b26d Mar 07 07:52:42 crc kubenswrapper[4761]: I0307 07:52:42.884051 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 07 07:52:42 crc kubenswrapper[4761]: W0307 07:52:42.892120 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod88f12d9b_cb82_4690_be2c_35d91899a86a.slice/crio-a5cef952adcb249cb54e99547d65e9b16d3d92a191a324f0f82c885682808b37 WatchSource:0}: Error finding container a5cef952adcb249cb54e99547d65e9b16d3d92a191a324f0f82c885682808b37: Status 404 returned error can't find the container with id a5cef952adcb249cb54e99547d65e9b16d3d92a191a324f0f82c885682808b37 Mar 07 07:52:42 crc kubenswrapper[4761]: W0307 07:52:42.893187 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-3bbae6d92aaedd31974c77d3c722bc861cdd3c7e4f78c5b7a91145eeec144df6 WatchSource:0}: Error finding container 3bbae6d92aaedd31974c77d3c722bc861cdd3c7e4f78c5b7a91145eeec144df6: Status 404 returned error can't find the container with id 3bbae6d92aaedd31974c77d3c722bc861cdd3c7e4f78c5b7a91145eeec144df6 Mar 07 07:52:42 crc kubenswrapper[4761]: I0307 07:52:42.919975 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866789466c-f86q8"] Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.441086 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"732fe657-405c-446a-bd53-a7ac3671531c","Type":"ContainerStarted","Data":"4e8c239d28b768d285fe6efe721f335f3b3db9ec34858dfaf6e3544e6cc5f895"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.441670 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"732fe657-405c-446a-bd53-a7ac3671531c","Type":"ContainerStarted","Data":"edb7c0fdcf4076608c4f9675ddf4d8ed11c44ec842f839517e4c094437c561a0"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.444928 4761 generic.go:334] "Generic (PLEG): container finished" podID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerID="a11ffafbe0f10f231010abe8a9bda1bc993b742ccaaef5cf882d03430e612a39" exitCode=0 Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.444980 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzrwt" event={"ID":"ace45696-b259-49f7-bfd9-8afe2557ac3e","Type":"ContainerDied","Data":"a11ffafbe0f10f231010abe8a9bda1bc993b742ccaaef5cf882d03430e612a39"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.449294 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"88f12d9b-cb82-4690-be2c-35d91899a86a","Type":"ContainerStarted","Data":"f8da5910371e18b2edd6886052f3fe7116d73f39cd035e121ecb5be718578e8f"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.449326 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"88f12d9b-cb82-4690-be2c-35d91899a86a","Type":"ContainerStarted","Data":"a5cef952adcb249cb54e99547d65e9b16d3d92a191a324f0f82c885682808b37"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.451605 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"204e68807f40ecc87452a4d08af8b7639ee8873193b599eacffff5bc47d0dd8e"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.451633 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"bde964060ded38866e1ff81beca83d42340ad2e2fbc052d127d16ed9a2781648"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.451837 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.453347 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" event={"ID":"852ec1df-3658-47ff-9e91-98c74a6e956a","Type":"ContainerStarted","Data":"6f5475ff350dfc8268fd99fe10d23cb4ff01166d466b4a60bf968bd76f426dff"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.453377 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" event={"ID":"852ec1df-3658-47ff-9e91-98c74a6e956a","Type":"ContainerStarted","Data":"ec9118a124662d15a6dd684ddc4478555ab7ac5701a1e5221e4368887332b26d"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.453555 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.454988 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" event={"ID":"d5b2d79f-d9de-4fd1-b966-db43755f248e","Type":"ContainerStarted","Data":"0cf0be69ddfc1e37d47ec4e905eb42fb011c8fd195d28d4b8e73b47761c8168b"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.455036 4761 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" event={"ID":"d5b2d79f-d9de-4fd1-b966-db43755f248e","Type":"ContainerStarted","Data":"6cd1a429b5bda47a9e6cd7a814da696021eef68e05cbc96a866c3193b6be8254"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.455163 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.456625 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1a8e1aa3c61b4b8d5370b805230e7996ce32588c0b95eaa3627e0e8c01b99784"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.456655 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9950339f43d2cec5d1aef15997d69f1f0b0ee6166e0b549bc81fe997ebd432e9"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.457964 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"108deedada05728ad876ff972c1f015649ce4f1d48243a4f3b65e497705847a3"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.458000 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3bbae6d92aaedd31974c77d3c722bc861cdd3c7e4f78c5b7a91145eeec144df6"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.458047 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.466487 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=17.466470699 podStartE2EDuration="17.466470699s" podCreationTimestamp="2026-03-07 07:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:52:43.459129966 +0000 UTC m=+220.368296441" watchObservedRunningTime="2026-03-07 07:52:43.466470699 +0000 UTC m=+220.375637174" Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.467745 4761 generic.go:334] "Generic (PLEG): container finished" podID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerID="d06af30b8ef444210e60a8b34a70a405dd866ea5ad35ffb9eb965e728d7b06de" exitCode=0 Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.467779 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phm95" event={"ID":"4601b717-e620-42a5-9f21-3b6fea1e71ff","Type":"ContainerDied","Data":"d06af30b8ef444210e60a8b34a70a405dd866ea5ad35ffb9eb965e728d7b06de"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.523498 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" podStartSLOduration=12.523476637 podStartE2EDuration="12.523476637s" podCreationTimestamp="2026-03-07 07:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:52:43.498989307 +0000 UTC m=+220.408155782" watchObservedRunningTime="2026-03-07 07:52:43.523476637 +0000 UTC m=+220.432643112" Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.540442 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=11.540425588 podStartE2EDuration="11.540425588s" podCreationTimestamp="2026-03-07 07:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:52:43.538850399 +0000 UTC m=+220.448016874" watchObservedRunningTime="2026-03-07 07:52:43.540425588 +0000 UTC m=+220.449592063" Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.588379 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" podStartSLOduration=12.588360251 podStartE2EDuration="12.588360251s" podCreationTimestamp="2026-03-07 07:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:52:43.572093596 +0000 UTC m=+220.481260071" watchObservedRunningTime="2026-03-07 07:52:43.588360251 +0000 UTC m=+220.497526726" Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.714254 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" path="/var/lib/kubelet/pods/f7a57ac7-fb31-4740-a91c-79947bbdb195/volumes" Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.743298 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:44 crc kubenswrapper[4761]: I0307 07:52:44.476574 4761 generic.go:334] "Generic (PLEG): container finished" podID="732fe657-405c-446a-bd53-a7ac3671531c" containerID="4e8c239d28b768d285fe6efe721f335f3b3db9ec34858dfaf6e3544e6cc5f895" exitCode=0 Mar 07 07:52:44 crc kubenswrapper[4761]: I0307 07:52:44.476951 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"732fe657-405c-446a-bd53-a7ac3671531c","Type":"ContainerDied","Data":"4e8c239d28b768d285fe6efe721f335f3b3db9ec34858dfaf6e3544e6cc5f895"} Mar 07 07:52:44 crc kubenswrapper[4761]: I0307 07:52:44.481128 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzrwt" event={"ID":"ace45696-b259-49f7-bfd9-8afe2557ac3e","Type":"ContainerStarted","Data":"23595ffc23bf9b6077f3c36141fd086dc7aa8bf2ee7f31f3a4967421f2665e8f"} Mar 07 07:52:44 crc kubenswrapper[4761]: I0307 07:52:44.485623 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phm95" event={"ID":"4601b717-e620-42a5-9f21-3b6fea1e71ff","Type":"ContainerStarted","Data":"64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd"} Mar 07 07:52:44 crc kubenswrapper[4761]: I0307 07:52:44.545308 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jzrwt" podStartSLOduration=2.447893056 podStartE2EDuration="51.545285664s" podCreationTimestamp="2026-03-07 07:51:53 +0000 UTC" firstStartedPulling="2026-03-07 07:51:54.81328387 +0000 UTC m=+171.722450335" lastFinishedPulling="2026-03-07 07:52:43.910676468 +0000 UTC m=+220.819842943" observedRunningTime="2026-03-07 07:52:44.544416622 +0000 UTC m=+221.453583137" watchObservedRunningTime="2026-03-07 07:52:44.545285664 +0000 UTC m=+221.454452149" Mar 07 07:52:44 crc kubenswrapper[4761]: I0307 07:52:44.547406 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-phm95" podStartSLOduration=2.466356234 podStartE2EDuration="52.547400226s" podCreationTimestamp="2026-03-07 07:51:52 +0000 UTC" firstStartedPulling="2026-03-07 07:51:53.802022669 +0000 UTC m=+170.711189144" lastFinishedPulling="2026-03-07 07:52:43.883066661 +0000 UTC m=+220.792233136" observedRunningTime="2026-03-07 07:52:44.530180478 +0000 UTC m=+221.439346953" 
watchObservedRunningTime="2026-03-07 07:52:44.547400226 +0000 UTC m=+221.456566711" Mar 07 07:52:45 crc kubenswrapper[4761]: I0307 07:52:45.494918 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xc9s" event={"ID":"e614b274-38db-4951-8f55-a09c49011cb5","Type":"ContainerStarted","Data":"8c89fe962a9624c9fd29c25d7f382e97b0df9a7bc0c84cdb1adb1eda58732e73"} Mar 07 07:52:45 crc kubenswrapper[4761]: I0307 07:52:45.779104 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 07:52:45 crc kubenswrapper[4761]: I0307 07:52:45.782557 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/732fe657-405c-446a-bd53-a7ac3671531c-kube-api-access\") pod \"732fe657-405c-446a-bd53-a7ac3671531c\" (UID: \"732fe657-405c-446a-bd53-a7ac3671531c\") " Mar 07 07:52:45 crc kubenswrapper[4761]: I0307 07:52:45.782691 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/732fe657-405c-446a-bd53-a7ac3671531c-kubelet-dir\") pod \"732fe657-405c-446a-bd53-a7ac3671531c\" (UID: \"732fe657-405c-446a-bd53-a7ac3671531c\") " Mar 07 07:52:45 crc kubenswrapper[4761]: I0307 07:52:45.782917 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/732fe657-405c-446a-bd53-a7ac3671531c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "732fe657-405c-446a-bd53-a7ac3671531c" (UID: "732fe657-405c-446a-bd53-a7ac3671531c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:52:45 crc kubenswrapper[4761]: I0307 07:52:45.790263 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/732fe657-405c-446a-bd53-a7ac3671531c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "732fe657-405c-446a-bd53-a7ac3671531c" (UID: "732fe657-405c-446a-bd53-a7ac3671531c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:52:45 crc kubenswrapper[4761]: I0307 07:52:45.883838 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/732fe657-405c-446a-bd53-a7ac3671531c-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:45 crc kubenswrapper[4761]: I0307 07:52:45.883882 4761 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/732fe657-405c-446a-bd53-a7ac3671531c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:46 crc kubenswrapper[4761]: I0307 07:52:46.501687 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztv97" event={"ID":"af0bdacc-ab60-43aa-adf2-86894b0896e3","Type":"ContainerStarted","Data":"3955eaf8cf0981f9e84cee368080bc45aa2a3fb2c6bacccd0b26b5e6cd9cd62b"} Mar 07 07:52:46 crc kubenswrapper[4761]: I0307 07:52:46.504384 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 07:52:46 crc kubenswrapper[4761]: I0307 07:52:46.504380 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"732fe657-405c-446a-bd53-a7ac3671531c","Type":"ContainerDied","Data":"edb7c0fdcf4076608c4f9675ddf4d8ed11c44ec842f839517e4c094437c561a0"} Mar 07 07:52:46 crc kubenswrapper[4761]: I0307 07:52:46.504484 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edb7c0fdcf4076608c4f9675ddf4d8ed11c44ec842f839517e4c094437c561a0" Mar 07 07:52:46 crc kubenswrapper[4761]: I0307 07:52:46.506412 4761 generic.go:334] "Generic (PLEG): container finished" podID="e614b274-38db-4951-8f55-a09c49011cb5" containerID="8c89fe962a9624c9fd29c25d7f382e97b0df9a7bc0c84cdb1adb1eda58732e73" exitCode=0 Mar 07 07:52:46 crc kubenswrapper[4761]: I0307 07:52:46.506450 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xc9s" event={"ID":"e614b274-38db-4951-8f55-a09c49011cb5","Type":"ContainerDied","Data":"8c89fe962a9624c9fd29c25d7f382e97b0df9a7bc0c84cdb1adb1eda58732e73"} Mar 07 07:52:47 crc kubenswrapper[4761]: I0307 07:52:47.126845 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5d2nn"] Mar 07 07:52:47 crc kubenswrapper[4761]: I0307 07:52:47.513168 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xc9s" event={"ID":"e614b274-38db-4951-8f55-a09c49011cb5","Type":"ContainerStarted","Data":"fe015ee6272e042dae30cd2808b3678be315fe850f6c47c0467d20dbada1e9fb"} Mar 07 07:52:47 crc kubenswrapper[4761]: I0307 07:52:47.514617 4761 generic.go:334] "Generic (PLEG): container finished" podID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerID="3955eaf8cf0981f9e84cee368080bc45aa2a3fb2c6bacccd0b26b5e6cd9cd62b" exitCode=0 Mar 07 07:52:47 crc kubenswrapper[4761]: I0307 
07:52:47.514657 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztv97" event={"ID":"af0bdacc-ab60-43aa-adf2-86894b0896e3","Type":"ContainerDied","Data":"3955eaf8cf0981f9e84cee368080bc45aa2a3fb2c6bacccd0b26b5e6cd9cd62b"} Mar 07 07:52:47 crc kubenswrapper[4761]: I0307 07:52:47.535621 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2xc9s" podStartSLOduration=3.4711398 podStartE2EDuration="53.535583736s" podCreationTimestamp="2026-03-07 07:51:54 +0000 UTC" firstStartedPulling="2026-03-07 07:51:56.958309393 +0000 UTC m=+173.867475868" lastFinishedPulling="2026-03-07 07:52:47.022753289 +0000 UTC m=+223.931919804" observedRunningTime="2026-03-07 07:52:47.534641362 +0000 UTC m=+224.443807837" watchObservedRunningTime="2026-03-07 07:52:47.535583736 +0000 UTC m=+224.444750221" Mar 07 07:52:48 crc kubenswrapper[4761]: I0307 07:52:48.519863 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztv97" event={"ID":"af0bdacc-ab60-43aa-adf2-86894b0896e3","Type":"ContainerStarted","Data":"df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5"} Mar 07 07:52:48 crc kubenswrapper[4761]: I0307 07:52:48.537794 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ztv97" podStartSLOduration=2.194311889 podStartE2EDuration="56.537774285s" podCreationTimestamp="2026-03-07 07:51:52 +0000 UTC" firstStartedPulling="2026-03-07 07:51:53.795629962 +0000 UTC m=+170.704796437" lastFinishedPulling="2026-03-07 07:52:48.139092358 +0000 UTC m=+225.048258833" observedRunningTime="2026-03-07 07:52:48.536659037 +0000 UTC m=+225.445825512" watchObservedRunningTime="2026-03-07 07:52:48.537774285 +0000 UTC m=+225.446940760" Mar 07 07:52:50 crc kubenswrapper[4761]: I0307 07:52:50.533902 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-8klgk" event={"ID":"d4b1310d-3887-4489-bbe0-5c63cd91603b","Type":"ContainerStarted","Data":"ab1aa855f8777cab454c6e836c94519969e93466e698f12a81e77477816f8084"} Mar 07 07:52:51 crc kubenswrapper[4761]: I0307 07:52:51.542325 4761 generic.go:334] "Generic (PLEG): container finished" podID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerID="ab1aa855f8777cab454c6e836c94519969e93466e698f12a81e77477816f8084" exitCode=0 Mar 07 07:52:51 crc kubenswrapper[4761]: I0307 07:52:51.542384 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8klgk" event={"ID":"d4b1310d-3887-4489-bbe0-5c63cd91603b","Type":"ContainerDied","Data":"ab1aa855f8777cab454c6e836c94519969e93466e698f12a81e77477816f8084"} Mar 07 07:52:52 crc kubenswrapper[4761]: I0307 07:52:52.548358 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8klgk" event={"ID":"d4b1310d-3887-4489-bbe0-5c63cd91603b","Type":"ContainerStarted","Data":"61c23c29de1ec9bb70bacb3e1ff503cace7efb59aaf0c96d9fe5e1fcb11ee0f9"} Mar 07 07:52:52 crc kubenswrapper[4761]: I0307 07:52:52.574040 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8klgk" podStartSLOduration=2.333216037 podStartE2EDuration="59.574015134s" podCreationTimestamp="2026-03-07 07:51:53 +0000 UTC" firstStartedPulling="2026-03-07 07:51:54.824444402 +0000 UTC m=+171.733610877" lastFinishedPulling="2026-03-07 07:52:52.065243499 +0000 UTC m=+228.974409974" observedRunningTime="2026-03-07 07:52:52.568660091 +0000 UTC m=+229.477826566" watchObservedRunningTime="2026-03-07 07:52:52.574015134 +0000 UTC m=+229.483181639" Mar 07 07:52:52 crc kubenswrapper[4761]: I0307 07:52:52.964532 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-phm95" Mar 07 07:52:52 crc kubenswrapper[4761]: I0307 07:52:52.964964 4761 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-phm95" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.178168 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.178219 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.252924 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.255036 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-phm95" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.408174 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.408240 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.448988 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.590061 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.595399 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.597691 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.597762 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.608961 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-phm95" Mar 07 07:52:54 crc kubenswrapper[4761]: I0307 07:52:54.638691 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8klgk" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerName="registry-server" probeResult="failure" output=< Mar 07 07:52:54 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 07:52:54 crc kubenswrapper[4761]: > Mar 07 07:52:55 crc kubenswrapper[4761]: I0307 07:52:55.562611 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkvj9" event={"ID":"d222854b-4039-4723-bdb4-2be9768cf9f7","Type":"ContainerStarted","Data":"1697a43aae919c796911915daf9d8e114a845441e589a2086b6d21850827de34"} Mar 07 07:52:55 crc kubenswrapper[4761]: I0307 07:52:55.564583 4761 generic.go:334] "Generic (PLEG): container finished" podID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerID="45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f" exitCode=0 Mar 07 07:52:55 crc kubenswrapper[4761]: I0307 07:52:55.564623 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvcd6" event={"ID":"2cdb750e-2fd2-4e57-b474-f91f874a5e8d","Type":"ContainerDied","Data":"45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f"} Mar 07 07:52:55 crc kubenswrapper[4761]: I0307 07:52:55.639944 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:52:55 crc kubenswrapper[4761]: I0307 
07:52:55.640003 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:52:55 crc kubenswrapper[4761]: I0307 07:52:55.681252 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.436424 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jzrwt"] Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.437116 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jzrwt" podUID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerName="registry-server" containerID="cri-o://23595ffc23bf9b6077f3c36141fd086dc7aa8bf2ee7f31f3a4967421f2665e8f" gracePeriod=2 Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.483587 4761 csr.go:261] certificate signing request csr-frq5z is approved, waiting to be issued Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.491764 4761 csr.go:257] certificate signing request csr-frq5z is issued Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.571152 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbq9k" event={"ID":"475b44c2-ce39-4d2c-b475-8a88c37a4d22","Type":"ContainerStarted","Data":"32ae07b5efa72bd99f8ff659836fc71899a382e15730308f60ed8dcbc0efef86"} Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.572901 4761 generic.go:334] "Generic (PLEG): container finished" podID="083b3718-3e45-40ca-8adf-5f417eeda74d" containerID="654d770009a0f10f99664fd8e046dfa38b717254a33124a41073359820cb504e" exitCode=0 Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.572959 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547832-2fpg8" 
event={"ID":"083b3718-3e45-40ca-8adf-5f417eeda74d","Type":"ContainerDied","Data":"654d770009a0f10f99664fd8e046dfa38b717254a33124a41073359820cb504e"} Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.575941 4761 generic.go:334] "Generic (PLEG): container finished" podID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerID="23595ffc23bf9b6077f3c36141fd086dc7aa8bf2ee7f31f3a4967421f2665e8f" exitCode=0 Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.575990 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzrwt" event={"ID":"ace45696-b259-49f7-bfd9-8afe2557ac3e","Type":"ContainerDied","Data":"23595ffc23bf9b6077f3c36141fd086dc7aa8bf2ee7f31f3a4967421f2665e8f"} Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.577504 4761 generic.go:334] "Generic (PLEG): container finished" podID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerID="1697a43aae919c796911915daf9d8e114a845441e589a2086b6d21850827de34" exitCode=0 Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.577759 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkvj9" event={"ID":"d222854b-4039-4723-bdb4-2be9768cf9f7","Type":"ContainerDied","Data":"1697a43aae919c796911915daf9d8e114a845441e589a2086b6d21850827de34"} Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.579344 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvcd6" event={"ID":"2cdb750e-2fd2-4e57-b474-f91f874a5e8d","Type":"ContainerStarted","Data":"279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9"} Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.623703 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.629348 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-wvcd6" podStartSLOduration=2.602809932 podStartE2EDuration="1m1.629326558s" podCreationTimestamp="2026-03-07 07:51:55 +0000 UTC" firstStartedPulling="2026-03-07 07:51:56.963164981 +0000 UTC m=+173.872331456" lastFinishedPulling="2026-03-07 07:52:55.989681607 +0000 UTC m=+232.898848082" observedRunningTime="2026-03-07 07:52:56.611261788 +0000 UTC m=+233.520428283" watchObservedRunningTime="2026-03-07 07:52:56.629326558 +0000 UTC m=+233.538493033" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.228949 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.237744 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjzgb\" (UniqueName: \"kubernetes.io/projected/ace45696-b259-49f7-bfd9-8afe2557ac3e-kube-api-access-zjzgb\") pod \"ace45696-b259-49f7-bfd9-8afe2557ac3e\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.237846 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-catalog-content\") pod \"ace45696-b259-49f7-bfd9-8afe2557ac3e\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.237873 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-utilities\") pod \"ace45696-b259-49f7-bfd9-8afe2557ac3e\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.239120 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-utilities" (OuterVolumeSpecName: 
"utilities") pod "ace45696-b259-49f7-bfd9-8afe2557ac3e" (UID: "ace45696-b259-49f7-bfd9-8afe2557ac3e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.247922 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace45696-b259-49f7-bfd9-8afe2557ac3e-kube-api-access-zjzgb" (OuterVolumeSpecName: "kube-api-access-zjzgb") pod "ace45696-b259-49f7-bfd9-8afe2557ac3e" (UID: "ace45696-b259-49f7-bfd9-8afe2557ac3e"). InnerVolumeSpecName "kube-api-access-zjzgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.325359 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ace45696-b259-49f7-bfd9-8afe2557ac3e" (UID: "ace45696-b259-49f7-bfd9-8afe2557ac3e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.339704 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.339768 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.339808 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjzgb\" (UniqueName: \"kubernetes.io/projected/ace45696-b259-49f7-bfd9-8afe2557ac3e-kube-api-access-zjzgb\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.493216 4761 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-11 12:32:39.484343121 +0000 UTC Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.493259 4761 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6700h39m41.991085923s for next certificate rotation Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.587436 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.587473 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzrwt" event={"ID":"ace45696-b259-49f7-bfd9-8afe2557ac3e","Type":"ContainerDied","Data":"0b17a3567c82ae855493975810bebedc8e189fc6631bb28e67e239c75520eef1"} Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.587601 4761 scope.go:117] "RemoveContainer" containerID="23595ffc23bf9b6077f3c36141fd086dc7aa8bf2ee7f31f3a4967421f2665e8f" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.589437 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkvj9" event={"ID":"d222854b-4039-4723-bdb4-2be9768cf9f7","Type":"ContainerStarted","Data":"b81436d222f8ddd0947ac90b7ed20c2e0d7ef2eee0b6f47ca68531cf660c1b94"} Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.590847 4761 generic.go:334] "Generic (PLEG): container finished" podID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerID="32ae07b5efa72bd99f8ff659836fc71899a382e15730308f60ed8dcbc0efef86" exitCode=0 Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.590910 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbq9k" event={"ID":"475b44c2-ce39-4d2c-b475-8a88c37a4d22","Type":"ContainerDied","Data":"32ae07b5efa72bd99f8ff659836fc71899a382e15730308f60ed8dcbc0efef86"} Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.612291 4761 scope.go:117] "RemoveContainer" containerID="a11ffafbe0f10f231010abe8a9bda1bc993b742ccaaef5cf882d03430e612a39" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.634537 4761 scope.go:117] "RemoveContainer" containerID="8d34025fd319ffd3e6850da4893b9537f4f1e29dec8fa6d5bb750de89505362c" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.639585 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-wkvj9" podStartSLOduration=2.67810305 podStartE2EDuration="1m1.639561676s" podCreationTimestamp="2026-03-07 07:51:56 +0000 UTC" firstStartedPulling="2026-03-07 07:51:58.029908257 +0000 UTC m=+174.939074732" lastFinishedPulling="2026-03-07 07:52:56.991366893 +0000 UTC m=+233.900533358" observedRunningTime="2026-03-07 07:52:57.637183647 +0000 UTC m=+234.546350132" watchObservedRunningTime="2026-03-07 07:52:57.639561676 +0000 UTC m=+234.548728151" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.655158 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jzrwt"] Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.658113 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jzrwt"] Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.712042 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ace45696-b259-49f7-bfd9-8afe2557ac3e" path="/var/lib/kubelet/pods/ace45696-b259-49f7-bfd9-8afe2557ac3e/volumes" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.926236 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547832-2fpg8" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.946672 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slg9r\" (UniqueName: \"kubernetes.io/projected/083b3718-3e45-40ca-8adf-5f417eeda74d-kube-api-access-slg9r\") pod \"083b3718-3e45-40ca-8adf-5f417eeda74d\" (UID: \"083b3718-3e45-40ca-8adf-5f417eeda74d\") " Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.949949 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/083b3718-3e45-40ca-8adf-5f417eeda74d-kube-api-access-slg9r" (OuterVolumeSpecName: "kube-api-access-slg9r") pod "083b3718-3e45-40ca-8adf-5f417eeda74d" (UID: "083b3718-3e45-40ca-8adf-5f417eeda74d"). InnerVolumeSpecName "kube-api-access-slg9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:52:58 crc kubenswrapper[4761]: I0307 07:52:58.048809 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slg9r\" (UniqueName: \"kubernetes.io/projected/083b3718-3e45-40ca-8adf-5f417eeda74d-kube-api-access-slg9r\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:58 crc kubenswrapper[4761]: I0307 07:52:58.493570 4761 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-18 10:00:27.735804631 +0000 UTC Mar 07 07:52:58 crc kubenswrapper[4761]: I0307 07:52:58.493880 4761 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6146h7m29.241928114s for next certificate rotation Mar 07 07:52:58 crc kubenswrapper[4761]: I0307 07:52:58.597963 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547832-2fpg8" event={"ID":"083b3718-3e45-40ca-8adf-5f417eeda74d","Type":"ContainerDied","Data":"2ee6a6d892b38761648f5532063c1b38a1d3cfa3b95bf0d700f8031b32a71582"} Mar 07 07:52:58 crc kubenswrapper[4761]: I0307 
07:52:58.598004 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ee6a6d892b38761648f5532063c1b38a1d3cfa3b95bf0d700f8031b32a71582" Mar 07 07:52:58 crc kubenswrapper[4761]: I0307 07:52:58.598061 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547832-2fpg8" Mar 07 07:53:01 crc kubenswrapper[4761]: I0307 07:53:01.619923 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbq9k" event={"ID":"475b44c2-ce39-4d2c-b475-8a88c37a4d22","Type":"ContainerStarted","Data":"deaab19835705647b9d6b2f0a10fb31b5a897e6c428ca064bd6819f7542264ae"} Mar 07 07:53:02 crc kubenswrapper[4761]: I0307 07:53:02.644473 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zbq9k" podStartSLOduration=4.960449724 podStartE2EDuration="1m7.64445469s" podCreationTimestamp="2026-03-07 07:51:55 +0000 UTC" firstStartedPulling="2026-03-07 07:51:58.106280463 +0000 UTC m=+175.015446938" lastFinishedPulling="2026-03-07 07:53:00.790285429 +0000 UTC m=+237.699451904" observedRunningTime="2026-03-07 07:53:02.642233825 +0000 UTC m=+239.551400300" watchObservedRunningTime="2026-03-07 07:53:02.64445469 +0000 UTC m=+239.553621165" Mar 07 07:53:03 crc kubenswrapper[4761]: I0307 07:53:03.659091 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:53:03 crc kubenswrapper[4761]: I0307 07:53:03.734630 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:53:04 crc kubenswrapper[4761]: I0307 07:53:04.837446 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8klgk"] Mar 07 07:53:05 crc kubenswrapper[4761]: I0307 07:53:05.637685 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:53:05 crc kubenswrapper[4761]: I0307 07:53:05.637766 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:53:05 crc kubenswrapper[4761]: I0307 07:53:05.641677 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8klgk" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerName="registry-server" containerID="cri-o://61c23c29de1ec9bb70bacb3e1ff503cace7efb59aaf0c96d9fe5e1fcb11ee0f9" gracePeriod=2 Mar 07 07:53:05 crc kubenswrapper[4761]: I0307 07:53:05.695375 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.181351 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.183214 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.585453 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.585509 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.623659 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.652743 4761 generic.go:334] "Generic (PLEG): container finished" podID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerID="61c23c29de1ec9bb70bacb3e1ff503cace7efb59aaf0c96d9fe5e1fcb11ee0f9" 
exitCode=0 Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.653083 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8klgk" event={"ID":"d4b1310d-3887-4489-bbe0-5c63cd91603b","Type":"ContainerDied","Data":"61c23c29de1ec9bb70bacb3e1ff503cace7efb59aaf0c96d9fe5e1fcb11ee0f9"} Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.694768 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.703052 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.817541 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.981896 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-catalog-content\") pod \"d4b1310d-3887-4489-bbe0-5c63cd91603b\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.981948 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg7qd\" (UniqueName: \"kubernetes.io/projected/d4b1310d-3887-4489-bbe0-5c63cd91603b-kube-api-access-wg7qd\") pod \"d4b1310d-3887-4489-bbe0-5c63cd91603b\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.981973 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-utilities\") pod \"d4b1310d-3887-4489-bbe0-5c63cd91603b\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " Mar 07 07:53:06 crc 
kubenswrapper[4761]: I0307 07:53:06.982774 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-utilities" (OuterVolumeSpecName: "utilities") pod "d4b1310d-3887-4489-bbe0-5c63cd91603b" (UID: "d4b1310d-3887-4489-bbe0-5c63cd91603b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.991021 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b1310d-3887-4489-bbe0-5c63cd91603b-kube-api-access-wg7qd" (OuterVolumeSpecName: "kube-api-access-wg7qd") pod "d4b1310d-3887-4489-bbe0-5c63cd91603b" (UID: "d4b1310d-3887-4489-bbe0-5c63cd91603b"). InnerVolumeSpecName "kube-api-access-wg7qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.041505 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4b1310d-3887-4489-bbe0-5c63cd91603b" (UID: "d4b1310d-3887-4489-bbe0-5c63cd91603b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.083213 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.083394 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg7qd\" (UniqueName: \"kubernetes.io/projected/d4b1310d-3887-4489-bbe0-5c63cd91603b-kube-api-access-wg7qd\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.083426 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.231999 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zbq9k" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerName="registry-server" probeResult="failure" output=< Mar 07 07:53:07 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 07:53:07 crc kubenswrapper[4761]: > Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.666120 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.668975 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8klgk" event={"ID":"d4b1310d-3887-4489-bbe0-5c63cd91603b","Type":"ContainerDied","Data":"afdb857524a0d1a4abf8957b56f8511a734c47a465cce76fa63859b050ae2b35"} Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.669066 4761 scope.go:117] "RemoveContainer" containerID="61c23c29de1ec9bb70bacb3e1ff503cace7efb59aaf0c96d9fe5e1fcb11ee0f9" Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.694630 4761 scope.go:117] "RemoveContainer" containerID="ab1aa855f8777cab454c6e836c94519969e93466e698f12a81e77477816f8084" Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.717569 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8klgk"] Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.719809 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8klgk"] Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.729079 4761 scope.go:117] "RemoveContainer" containerID="7d5d534072a9499e74df376fb3dd630c17ef0858bff6f453cc0c171a3bcd99db" Mar 07 07:53:08 crc kubenswrapper[4761]: I0307 07:53:08.042324 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvcd6"] Mar 07 07:53:08 crc kubenswrapper[4761]: I0307 07:53:08.672817 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wvcd6" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerName="registry-server" containerID="cri-o://279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9" gracePeriod=2 Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.147368 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.206865 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfgsw\" (UniqueName: \"kubernetes.io/projected/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-kube-api-access-bfgsw\") pod \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.206988 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-utilities\") pod \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.207013 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-catalog-content\") pod \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.208033 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-utilities" (OuterVolumeSpecName: "utilities") pod "2cdb750e-2fd2-4e57-b474-f91f874a5e8d" (UID: "2cdb750e-2fd2-4e57-b474-f91f874a5e8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.212056 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-kube-api-access-bfgsw" (OuterVolumeSpecName: "kube-api-access-bfgsw") pod "2cdb750e-2fd2-4e57-b474-f91f874a5e8d" (UID: "2cdb750e-2fd2-4e57-b474-f91f874a5e8d"). InnerVolumeSpecName "kube-api-access-bfgsw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.242417 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cdb750e-2fd2-4e57-b474-f91f874a5e8d" (UID: "2cdb750e-2fd2-4e57-b474-f91f874a5e8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.307867 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfgsw\" (UniqueName: \"kubernetes.io/projected/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-kube-api-access-bfgsw\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.307905 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.307919 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.681765 4761 generic.go:334] "Generic (PLEG): container finished" podID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerID="279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9" exitCode=0 Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.681823 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvcd6" event={"ID":"2cdb750e-2fd2-4e57-b474-f91f874a5e8d","Type":"ContainerDied","Data":"279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9"} Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.681847 4761 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.681871 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvcd6" event={"ID":"2cdb750e-2fd2-4e57-b474-f91f874a5e8d","Type":"ContainerDied","Data":"01a0a0986372d1d9f62d984187377283eba6abf44594d70aa40803e57b311878"} Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.681898 4761 scope.go:117] "RemoveContainer" containerID="279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.710932 4761 scope.go:117] "RemoveContainer" containerID="45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.723191 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" path="/var/lib/kubelet/pods/d4b1310d-3887-4489-bbe0-5c63cd91603b/volumes" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.723954 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvcd6"] Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.725784 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvcd6"] Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.746553 4761 scope.go:117] "RemoveContainer" containerID="83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.759595 4761 scope.go:117] "RemoveContainer" containerID="279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9" Mar 07 07:53:09 crc kubenswrapper[4761]: E0307 07:53:09.759999 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9\": container with ID 
starting with 279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9 not found: ID does not exist" containerID="279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.760030 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9"} err="failed to get container status \"279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9\": rpc error: code = NotFound desc = could not find container \"279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9\": container with ID starting with 279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9 not found: ID does not exist" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.760051 4761 scope.go:117] "RemoveContainer" containerID="45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f" Mar 07 07:53:09 crc kubenswrapper[4761]: E0307 07:53:09.760328 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f\": container with ID starting with 45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f not found: ID does not exist" containerID="45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.760349 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f"} err="failed to get container status \"45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f\": rpc error: code = NotFound desc = could not find container \"45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f\": container with ID starting with 45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f not found: 
ID does not exist"
Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.760362 4761 scope.go:117] "RemoveContainer" containerID="83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781"
Mar 07 07:53:09 crc kubenswrapper[4761]: E0307 07:53:09.760888 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781\": container with ID starting with 83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781 not found: ID does not exist" containerID="83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781"
Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.760916 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781"} err="failed to get container status \"83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781\": rpc error: code = NotFound desc = could not find container \"83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781\": container with ID starting with 83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781 not found: ID does not exist"
Mar 07 07:53:10 crc kubenswrapper[4761]: I0307 07:53:10.439446 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wkvj9"]
Mar 07 07:53:10 crc kubenswrapper[4761]: I0307 07:53:10.439659 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wkvj9" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerName="registry-server" containerID="cri-o://b81436d222f8ddd0947ac90b7ed20c2e0d7ef2eee0b6f47ca68531cf660c1b94" gracePeriod=2
Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.196425 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86d9ccbc48-c2429"]
Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.196669 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" podUID="852ec1df-3658-47ff-9e91-98c74a6e956a" containerName="controller-manager" containerID="cri-o://6f5475ff350dfc8268fd99fe10d23cb4ff01166d466b4a60bf968bd76f426dff" gracePeriod=30
Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.320261 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866789466c-f86q8"]
Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.320827 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" podUID="d5b2d79f-d9de-4fd1-b966-db43755f248e" containerName="route-controller-manager" containerID="cri-o://0cf0be69ddfc1e37d47ec4e905eb42fb011c8fd195d28d4b8e73b47761c8168b" gracePeriod=30
Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.467345 4761 patch_prober.go:28] interesting pod/route-controller-manager-866789466c-f86q8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body=
Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.467397 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" podUID="d5b2d79f-d9de-4fd1-b966-db43755f248e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused"
Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.698786 4761 generic.go:334] "Generic (PLEG): container finished" podID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerID="b81436d222f8ddd0947ac90b7ed20c2e0d7ef2eee0b6f47ca68531cf660c1b94" exitCode=0
Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.698863 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkvj9" event={"ID":"d222854b-4039-4723-bdb4-2be9768cf9f7","Type":"ContainerDied","Data":"b81436d222f8ddd0947ac90b7ed20c2e0d7ef2eee0b6f47ca68531cf660c1b94"}
Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.701070 4761 generic.go:334] "Generic (PLEG): container finished" podID="852ec1df-3658-47ff-9e91-98c74a6e956a" containerID="6f5475ff350dfc8268fd99fe10d23cb4ff01166d466b4a60bf968bd76f426dff" exitCode=0
Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.701132 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" event={"ID":"852ec1df-3658-47ff-9e91-98c74a6e956a","Type":"ContainerDied","Data":"6f5475ff350dfc8268fd99fe10d23cb4ff01166d466b4a60bf968bd76f426dff"}
Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.715468 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" path="/var/lib/kubelet/pods/2cdb750e-2fd2-4e57-b474-f91f874a5e8d/volumes"
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.155968 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" podUID="21e2c5a2-e968-4844-8843-23870b388e6d" containerName="oauth-openshift" containerID="cri-o://407375ec00dc04252023445b62731194fbfb32d50af19f9b516072fe3a71402b" gracePeriod=15
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.193106 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkvj9"
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.243116 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-utilities\") pod \"d222854b-4039-4723-bdb4-2be9768cf9f7\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") "
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.243869 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftpss\" (UniqueName: \"kubernetes.io/projected/d222854b-4039-4723-bdb4-2be9768cf9f7-kube-api-access-ftpss\") pod \"d222854b-4039-4723-bdb4-2be9768cf9f7\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") "
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.243902 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-catalog-content\") pod \"d222854b-4039-4723-bdb4-2be9768cf9f7\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") "
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.246121 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-utilities" (OuterVolumeSpecName: "utilities") pod "d222854b-4039-4723-bdb4-2be9768cf9f7" (UID: "d222854b-4039-4723-bdb4-2be9768cf9f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.256208 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d222854b-4039-4723-bdb4-2be9768cf9f7-kube-api-access-ftpss" (OuterVolumeSpecName: "kube-api-access-ftpss") pod "d222854b-4039-4723-bdb4-2be9768cf9f7" (UID: "d222854b-4039-4723-bdb4-2be9768cf9f7"). InnerVolumeSpecName "kube-api-access-ftpss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.345212 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.347649 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftpss\" (UniqueName: \"kubernetes.io/projected/d222854b-4039-4723-bdb4-2be9768cf9f7-kube-api-access-ftpss\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.389672 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d222854b-4039-4723-bdb4-2be9768cf9f7" (UID: "d222854b-4039-4723-bdb4-2be9768cf9f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.449092 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.712986 4761 generic.go:334] "Generic (PLEG): container finished" podID="d5b2d79f-d9de-4fd1-b966-db43755f248e" containerID="0cf0be69ddfc1e37d47ec4e905eb42fb011c8fd195d28d4b8e73b47761c8168b" exitCode=0
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.713046 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" event={"ID":"d5b2d79f-d9de-4fd1-b966-db43755f248e","Type":"ContainerDied","Data":"0cf0be69ddfc1e37d47ec4e905eb42fb011c8fd195d28d4b8e73b47761c8168b"}
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.720151 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkvj9"
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.720149 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkvj9" event={"ID":"d222854b-4039-4723-bdb4-2be9768cf9f7","Type":"ContainerDied","Data":"de23a09e452f2bfb79559f697b53473e1e4027e8b53b6fbf628450ba449d519f"}
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.720282 4761 scope.go:117] "RemoveContainer" containerID="b81436d222f8ddd0947ac90b7ed20c2e0d7ef2eee0b6f47ca68531cf660c1b94"
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.722165 4761 generic.go:334] "Generic (PLEG): container finished" podID="21e2c5a2-e968-4844-8843-23870b388e6d" containerID="407375ec00dc04252023445b62731194fbfb32d50af19f9b516072fe3a71402b" exitCode=0
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.722192 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" event={"ID":"21e2c5a2-e968-4844-8843-23870b388e6d","Type":"ContainerDied","Data":"407375ec00dc04252023445b62731194fbfb32d50af19f9b516072fe3a71402b"}
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.766883 4761 scope.go:117] "RemoveContainer" containerID="1697a43aae919c796911915daf9d8e114a845441e589a2086b6d21850827de34"
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.777691 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429"
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.799160 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wkvj9"]
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.807536 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wkvj9"]
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.829916 4761 scope.go:117] "RemoveContainer" containerID="3150372c2d99fad85617856cce969ff24fbe8e06307e33cd0c0a4e391026e50b"
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.882363 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8"
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.959652 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-client-ca\") pod \"852ec1df-3658-47ff-9e91-98c74a6e956a\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") "
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.959708 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/852ec1df-3658-47ff-9e91-98c74a6e956a-serving-cert\") pod \"852ec1df-3658-47ff-9e91-98c74a6e956a\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") "
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.959797 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-proxy-ca-bundles\") pod \"852ec1df-3658-47ff-9e91-98c74a6e956a\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") "
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.959828 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmbqk\" (UniqueName: \"kubernetes.io/projected/852ec1df-3658-47ff-9e91-98c74a6e956a-kube-api-access-dmbqk\") pod \"852ec1df-3658-47ff-9e91-98c74a6e956a\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") "
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.959848 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-config\") pod \"852ec1df-3658-47ff-9e91-98c74a6e956a\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") "
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.960867 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "852ec1df-3658-47ff-9e91-98c74a6e956a" (UID: "852ec1df-3658-47ff-9e91-98c74a6e956a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.961074 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-config" (OuterVolumeSpecName: "config") pod "852ec1df-3658-47ff-9e91-98c74a6e956a" (UID: "852ec1df-3658-47ff-9e91-98c74a6e956a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.961237 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-client-ca" (OuterVolumeSpecName: "client-ca") pod "852ec1df-3658-47ff-9e91-98c74a6e956a" (UID: "852ec1df-3658-47ff-9e91-98c74a6e956a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.963296 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/852ec1df-3658-47ff-9e91-98c74a6e956a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "852ec1df-3658-47ff-9e91-98c74a6e956a" (UID: "852ec1df-3658-47ff-9e91-98c74a6e956a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.964918 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/852ec1df-3658-47ff-9e91-98c74a6e956a-kube-api-access-dmbqk" (OuterVolumeSpecName: "kube-api-access-dmbqk") pod "852ec1df-3658-47ff-9e91-98c74a6e956a" (UID: "852ec1df-3658-47ff-9e91-98c74a6e956a"). InnerVolumeSpecName "kube-api-access-dmbqk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.075363 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-client-ca\") pod \"d5b2d79f-d9de-4fd1-b966-db43755f248e\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") "
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.075448 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-config\") pod \"d5b2d79f-d9de-4fd1-b966-db43755f248e\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") "
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.075574 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd2zx\" (UniqueName: \"kubernetes.io/projected/d5b2d79f-d9de-4fd1-b966-db43755f248e-kube-api-access-vd2zx\") pod \"d5b2d79f-d9de-4fd1-b966-db43755f248e\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") "
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.075657 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5b2d79f-d9de-4fd1-b966-db43755f248e-serving-cert\") pod \"d5b2d79f-d9de-4fd1-b966-db43755f248e\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") "
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.075928 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-client-ca\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.075940 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/852ec1df-3658-47ff-9e91-98c74a6e956a-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.075950 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.075961 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmbqk\" (UniqueName: \"kubernetes.io/projected/852ec1df-3658-47ff-9e91-98c74a6e956a-kube-api-access-dmbqk\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.075971 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.076113 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-client-ca" (OuterVolumeSpecName: "client-ca") pod "d5b2d79f-d9de-4fd1-b966-db43755f248e" (UID: "d5b2d79f-d9de-4fd1-b966-db43755f248e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.076202 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-config" (OuterVolumeSpecName: "config") pod "d5b2d79f-d9de-4fd1-b966-db43755f248e" (UID: "d5b2d79f-d9de-4fd1-b966-db43755f248e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.081807 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b2d79f-d9de-4fd1-b966-db43755f248e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d5b2d79f-d9de-4fd1-b966-db43755f248e" (UID: "d5b2d79f-d9de-4fd1-b966-db43755f248e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.090323 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b2d79f-d9de-4fd1-b966-db43755f248e-kube-api-access-vd2zx" (OuterVolumeSpecName: "kube-api-access-vd2zx") pod "d5b2d79f-d9de-4fd1-b966-db43755f248e" (UID: "d5b2d79f-d9de-4fd1-b966-db43755f248e"). InnerVolumeSpecName "kube-api-access-vd2zx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.157264 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn"
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.181053 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-service-ca\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") "
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.181674 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5b2d79f-d9de-4fd1-b966-db43755f248e-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.181697 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-client-ca\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.181709 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.181745 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd2zx\" (UniqueName: \"kubernetes.io/projected/d5b2d79f-d9de-4fd1-b966-db43755f248e-kube-api-access-vd2zx\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.182031 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283045 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-audit-policies\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") "
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283094 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-serving-cert\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") "
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283123 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-provider-selection\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") "
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283147 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-router-certs\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") "
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283175 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-session\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") "
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283200 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzrv9\" (UniqueName: \"kubernetes.io/projected/21e2c5a2-e968-4844-8843-23870b388e6d-kube-api-access-bzrv9\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") "
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283223 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-idp-0-file-data\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") "
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283241 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-trusted-ca-bundle\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") "
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283268 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21e2c5a2-e968-4844-8843-23870b388e6d-audit-dir\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") "
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283283 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-login\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") "
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283308 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-error\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") "
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283329 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-ocp-branding-template\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") "
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283344 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-cliconfig\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") "
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283506 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283838 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e2c5a2-e968-4844-8843-23870b388e6d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.284298 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.284592 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.284940 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.290597 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.291318 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.292571 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.292624 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e2c5a2-e968-4844-8843-23870b388e6d-kube-api-access-bzrv9" (OuterVolumeSpecName: "kube-api-access-bzrv9") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "kube-api-access-bzrv9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.293604 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.294154 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.294327 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.294443 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.295986 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384665 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384707 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384742 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzrv9\" (UniqueName: \"kubernetes.io/projected/21e2c5a2-e968-4844-8843-23870b388e6d-kube-api-access-bzrv9\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384756 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384771 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384786 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384801 4761 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21e2c5a2-e968-4844-8843-23870b388e6d-audit-dir\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384819 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384832 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384846 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384859 4761 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-audit-policies\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384871 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384885 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.717419 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" path="/var/lib/kubelet/pods/d222854b-4039-4723-bdb4-2be9768cf9f7/volumes"
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.731660 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" event={"ID":"21e2c5a2-e968-4844-8843-23870b388e6d","Type":"ContainerDied","Data":"85aa246b580d1c61a9b7d0d898416a8e0cd2e35170d5426b941bc8973d1755da"}
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.731798 4761 scope.go:117] "RemoveContainer" containerID="407375ec00dc04252023445b62731194fbfb32d50af19f9b516072fe3a71402b"
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.732121 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn"
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.735614 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" event={"ID":"852ec1df-3658-47ff-9e91-98c74a6e956a","Type":"ContainerDied","Data":"ec9118a124662d15a6dd684ddc4478555ab7ac5701a1e5221e4368887332b26d"}
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.735819 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429"
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.739418 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" event={"ID":"d5b2d79f-d9de-4fd1-b966-db43755f248e","Type":"ContainerDied","Data":"6cd1a429b5bda47a9e6cd7a814da696021eef68e05cbc96a866c3193b6be8254"}
Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.739542 4761 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.760472 4761 scope.go:117] "RemoveContainer" containerID="6f5475ff350dfc8268fd99fe10d23cb4ff01166d466b4a60bf968bd76f426dff" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.768425 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.768499 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.793936 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866789466c-f86q8"] Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.794229 4761 scope.go:117] "RemoveContainer" containerID="0cf0be69ddfc1e37d47ec4e905eb42fb011c8fd195d28d4b8e73b47761c8168b" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.803112 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866789466c-f86q8"] Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.816074 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5d2nn"] Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.820912 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5d2nn"] Mar 07 07:53:13 
crc kubenswrapper[4761]: I0307 07:53:13.828305 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86d9ccbc48-c2429"] Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.833073 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86d9ccbc48-c2429"] Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438121 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb"] Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438418 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerName="extract-utilities" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438434 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerName="extract-utilities" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438447 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerName="extract-content" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438454 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerName="extract-content" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438466 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="732fe657-405c-446a-bd53-a7ac3671531c" containerName="pruner" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438475 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="732fe657-405c-446a-bd53-a7ac3671531c" containerName="pruner" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438482 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerName="extract-content" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 
07:53:14.438488 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerName="extract-content" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438497 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerName="extract-content" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438503 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerName="extract-content" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438513 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerName="extract-utilities" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438520 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerName="extract-utilities" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438526 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438533 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438543 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerName="extract-utilities" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438549 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerName="extract-utilities" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438559 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e2c5a2-e968-4844-8843-23870b388e6d" containerName="oauth-openshift" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 
07:53:14.438568 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e2c5a2-e968-4844-8843-23870b388e6d" containerName="oauth-openshift" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438574 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b2d79f-d9de-4fd1-b966-db43755f248e" containerName="route-controller-manager" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438579 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b2d79f-d9de-4fd1-b966-db43755f248e" containerName="route-controller-manager" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438586 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="083b3718-3e45-40ca-8adf-5f417eeda74d" containerName="oc" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438592 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="083b3718-3e45-40ca-8adf-5f417eeda74d" containerName="oc" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438601 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438607 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438615 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerName="kube-multus-additional-cni-plugins" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438621 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerName="kube-multus-additional-cni-plugins" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438629 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerName="registry-server" Mar 07 07:53:14 crc 
kubenswrapper[4761]: I0307 07:53:14.438635 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438643 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438649 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438655 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerName="extract-content" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438661 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerName="extract-content" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438669 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerName="extract-utilities" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438676 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerName="extract-utilities" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438685 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852ec1df-3658-47ff-9e91-98c74a6e956a" containerName="controller-manager" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438691 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="852ec1df-3658-47ff-9e91-98c74a6e956a" containerName="controller-manager" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438800 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerName="registry-server" Mar 07 07:53:14 crc 
kubenswrapper[4761]: I0307 07:53:14.438810 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438821 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="852ec1df-3658-47ff-9e91-98c74a6e956a" containerName="controller-manager" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438829 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438836 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="732fe657-405c-446a-bd53-a7ac3671531c" containerName="pruner" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438843 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerName="kube-multus-additional-cni-plugins" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438851 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="083b3718-3e45-40ca-8adf-5f417eeda74d" containerName="oc" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438857 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b2d79f-d9de-4fd1-b966-db43755f248e" containerName="route-controller-manager" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438865 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e2c5a2-e968-4844-8843-23870b388e6d" containerName="oauth-openshift" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438872 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.439322 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.443365 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6fdb49984-plf57"] Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.444355 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.444868 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.445246 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.445489 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.445679 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.446463 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.446680 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.449791 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.449882 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.453829 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.454270 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.454599 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.455364 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.459984 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb"] Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.465746 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.482243 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fdb49984-plf57"] Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.504372 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-config\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.504475 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-config\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.504517 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8a90600-3887-4769-a6be-c49c04603b77-serving-cert\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.504553 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdhbn\" (UniqueName: \"kubernetes.io/projected/a8a90600-3887-4769-a6be-c49c04603b77-kube-api-access-vdhbn\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.504579 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-client-ca\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " 
pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.504612 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-proxy-ca-bundles\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.504635 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n65cj\" (UniqueName: \"kubernetes.io/projected/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-kube-api-access-n65cj\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.504656 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-serving-cert\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.504682 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-client-ca\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.606163 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vdhbn\" (UniqueName: \"kubernetes.io/projected/a8a90600-3887-4769-a6be-c49c04603b77-kube-api-access-vdhbn\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.606244 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-client-ca\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.606300 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-proxy-ca-bundles\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.606344 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n65cj\" (UniqueName: \"kubernetes.io/projected/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-kube-api-access-n65cj\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.606380 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-serving-cert\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 
07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.606429 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-client-ca\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.606483 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-config\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.606599 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-config\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.606660 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8a90600-3887-4769-a6be-c49c04603b77-serving-cert\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.607770 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-client-ca\") pod \"controller-manager-6fdb49984-plf57\" (UID: 
\"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.607861 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-proxy-ca-bundles\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.610053 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-client-ca\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.610494 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-config\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.610937 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-config\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.614659 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-serving-cert\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57"
Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.614694 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8a90600-3887-4769-a6be-c49c04603b77-serving-cert\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb"
Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.637208 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdhbn\" (UniqueName: \"kubernetes.io/projected/a8a90600-3887-4769-a6be-c49c04603b77-kube-api-access-vdhbn\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb"
Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.637406 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n65cj\" (UniqueName: \"kubernetes.io/projected/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-kube-api-access-n65cj\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57"
Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.767956 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb"
Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.793821 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.312615 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb"]
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.318077 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fdb49984-plf57"]
Mar 07 07:53:15 crc kubenswrapper[4761]: W0307 07:53:15.320890 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabb53652_b9ee_41d0_9152_4b71fcdb1e7e.slice/crio-f9e04264fdbf32961bf51705d8c383d7888571a0733f5db7cc525d9e3bdaddcf WatchSource:0}: Error finding container f9e04264fdbf32961bf51705d8c383d7888571a0733f5db7cc525d9e3bdaddcf: Status 404 returned error can't find the container with id f9e04264fdbf32961bf51705d8c383d7888571a0733f5db7cc525d9e3bdaddcf
Mar 07 07:53:15 crc kubenswrapper[4761]: W0307 07:53:15.323026 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8a90600_3887_4769_a6be_c49c04603b77.slice/crio-7aafe68333d5ac096b99d20cb1e129db74f2f470591377afefd9cf0185d6caa1 WatchSource:0}: Error finding container 7aafe68333d5ac096b99d20cb1e129db74f2f470591377afefd9cf0185d6caa1: Status 404 returned error can't find the container with id 7aafe68333d5ac096b99d20cb1e129db74f2f470591377afefd9cf0185d6caa1
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.445986 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-679bdd659-ctglc"]
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.447543 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.462250 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.462532 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.463064 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.463279 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.463432 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.463626 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.464982 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.465212 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.465436 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.465635 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.465892 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.466431 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.475576 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-679bdd659-ctglc"]
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.496122 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.499352 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.500772 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.521655 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.521703 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltm7q\" (UniqueName: \"kubernetes.io/projected/e91a422d-2255-4769-8a0e-6eb6f8b93eed-kube-api-access-ltm7q\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.521779 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.521810 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-router-certs\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.521837 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.521856 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-service-ca\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.521879 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.521904 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-session\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.522140 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-template-login\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.522246 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-template-error\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.522277 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-audit-policies\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.522299 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.522328 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e91a422d-2255-4769-8a0e-6eb6f8b93eed-audit-dir\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.522349 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623500 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623561 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-service-ca\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623594 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623625 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-session\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623666 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-template-login\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623691 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-template-error\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623735 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-audit-policies\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623758 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623784 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e91a422d-2255-4769-8a0e-6eb6f8b93eed-audit-dir\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623805 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623886 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e91a422d-2255-4769-8a0e-6eb6f8b93eed-audit-dir\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623949 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.624082 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltm7q\" (UniqueName: \"kubernetes.io/projected/e91a422d-2255-4769-8a0e-6eb6f8b93eed-kube-api-access-ltm7q\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.624128 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.624550 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-audit-policies\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.624553 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.624556 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-service-ca\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.624664 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-router-certs\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.624837 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.629086 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.629127 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-session\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.629775 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-router-certs\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.629775 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.630038 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-template-login\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.630783 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.634705 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.635469 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-template-error\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.644178 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltm7q\" (UniqueName: \"kubernetes.io/projected/e91a422d-2255-4769-8a0e-6eb6f8b93eed-kube-api-access-ltm7q\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.711400 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21e2c5a2-e968-4844-8843-23870b388e6d" path="/var/lib/kubelet/pods/21e2c5a2-e968-4844-8843-23870b388e6d/volumes"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.712471 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="852ec1df-3658-47ff-9e91-98c74a6e956a" path="/var/lib/kubelet/pods/852ec1df-3658-47ff-9e91-98c74a6e956a/volumes"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.713202 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5b2d79f-d9de-4fd1-b966-db43755f248e" path="/var/lib/kubelet/pods/d5b2d79f-d9de-4fd1-b966-db43755f248e/volumes"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.758669 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" event={"ID":"abb53652-b9ee-41d0-9152-4b71fcdb1e7e","Type":"ContainerStarted","Data":"19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb"}
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.758733 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" event={"ID":"abb53652-b9ee-41d0-9152-4b71fcdb1e7e","Type":"ContainerStarted","Data":"f9e04264fdbf32961bf51705d8c383d7888571a0733f5db7cc525d9e3bdaddcf"}
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.759449 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.760399 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" event={"ID":"a8a90600-3887-4769-a6be-c49c04603b77","Type":"ContainerStarted","Data":"4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca"}
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.760521 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" event={"ID":"a8a90600-3887-4769-a6be-c49c04603b77","Type":"ContainerStarted","Data":"7aafe68333d5ac096b99d20cb1e129db74f2f470591377afefd9cf0185d6caa1"}
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.761098 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.780129 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" podStartSLOduration=4.7801056 podStartE2EDuration="4.7801056s" podCreationTimestamp="2026-03-07 07:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:53:15.77691139 +0000 UTC m=+252.686077865" watchObservedRunningTime="2026-03-07 07:53:15.7801056 +0000 UTC m=+252.689272075"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.781020 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.804185 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.805615 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" podStartSLOduration=4.805591674 podStartE2EDuration="4.805591674s" podCreationTimestamp="2026-03-07 07:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:53:15.800326253 +0000 UTC m=+252.709492738" watchObservedRunningTime="2026-03-07 07:53:15.805591674 +0000 UTC m=+252.714758149"
Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.993628 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb"
Mar 07 07:53:16 crc kubenswrapper[4761]: I0307 07:53:16.256080 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-679bdd659-ctglc"]
Mar 07 07:53:16 crc kubenswrapper[4761]: I0307 07:53:16.261662 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zbq9k"
Mar 07 07:53:16 crc kubenswrapper[4761]: W0307 07:53:16.267375 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode91a422d_2255_4769_8a0e_6eb6f8b93eed.slice/crio-d101ffb6e11b0e39146777e5c673edd7ea91cf454d6be8bb2248038f9da8fa80 WatchSource:0}: Error finding container d101ffb6e11b0e39146777e5c673edd7ea91cf454d6be8bb2248038f9da8fa80: Status 404 returned error can't find the container with id d101ffb6e11b0e39146777e5c673edd7ea91cf454d6be8bb2248038f9da8fa80
Mar 07 07:53:16 crc kubenswrapper[4761]: I0307 07:53:16.312477 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zbq9k"
Mar 07 07:53:16 crc kubenswrapper[4761]: I0307 07:53:16.767734 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" event={"ID":"e91a422d-2255-4769-8a0e-6eb6f8b93eed","Type":"ContainerStarted","Data":"5fcc2c691603f4627e78e1eaff03f53c949e513e4a726f449c6bcc6c90c6849a"}
Mar 07 07:53:16 crc kubenswrapper[4761]: I0307 07:53:16.768095 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" event={"ID":"e91a422d-2255-4769-8a0e-6eb6f8b93eed","Type":"ContainerStarted","Data":"d101ffb6e11b0e39146777e5c673edd7ea91cf454d6be8bb2248038f9da8fa80"}
Mar 07 07:53:16 crc kubenswrapper[4761]: I0307 07:53:16.790561 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podStartSLOduration=29.790540564 podStartE2EDuration="29.790540564s" podCreationTimestamp="2026-03-07 07:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:53:16.788432281 +0000 UTC m=+253.697598767" watchObservedRunningTime="2026-03-07 07:53:16.790540564 +0000 UTC m=+253.699707039"
Mar 07 07:53:17 crc kubenswrapper[4761]: I0307 07:53:17.775594 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:17 crc kubenswrapper[4761]: I0307 07:53:17.781629 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 07:53:18 crc kubenswrapper[4761]: I0307 07:53:18.460103 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.860846 4761 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.861649 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.861987 4761 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.862545 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19" gracePeriod=15
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.862575 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78" gracePeriod=15
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.862753 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b" gracePeriod=15
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.862786 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315" gracePeriod=15
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.862809 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9" gracePeriod=15
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.894337 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.894551 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.894594 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.894630 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.894826 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904249 4761 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.904523 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904549 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.904565 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904576 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.904591 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904601 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.904610 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904617 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.904629 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904638 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.904648 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904656 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.904669 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904677 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.904696 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904704 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Mar 07 07:53:20 crc kubenswrapper[4761]:
I0307 07:53:20.904902 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904921 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904933 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904944 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904955 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904965 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904973 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904983 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.905093 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.905103 4761 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.905215 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.905323 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.905332 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.935620 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.996282 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.996641 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.996682 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.996709 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.996823 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.996842 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.996876 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.996913 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.997009 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.997045 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.997075 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.997103 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.997140 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.098550 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.098588 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.098646 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.098726 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.098709 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.098805 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.228950 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:21 crc kubenswrapper[4761]: W0307 07:53:21.257381 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-06d4e5b7cb74d790ea1ee3277a084219d145c65ea7448052bb2c553fbc9661bf WatchSource:0}: Error finding container 06d4e5b7cb74d790ea1ee3277a084219d145c65ea7448052bb2c553fbc9661bf: Status 404 returned error can't find the container with id 06d4e5b7cb74d790ea1ee3277a084219d145c65ea7448052bb2c553fbc9661bf Mar 07 07:53:21 crc kubenswrapper[4761]: E0307 07:53:21.261127 4761 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189a7fdf8212b620 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:53:21.2604104 +0000 UTC 
m=+258.169576875,LastTimestamp:2026-03-07 07:53:21.2604104 +0000 UTC m=+258.169576875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.809663 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.811643 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.812626 4761 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78" exitCode=0 Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.812671 4761 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b" exitCode=0 Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.812696 4761 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315" exitCode=0 Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.812758 4761 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9" exitCode=2 Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.812857 4761 scope.go:117] "RemoveContainer" containerID="d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.815565 4761 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c355e92130d0e99a3a13893d7dfea9a751cb2d75ba4a5de59dc8ae3c788e30c1"} Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.815648 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"06d4e5b7cb74d790ea1ee3277a084219d145c65ea7448052bb2c553fbc9661bf"} Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.816463 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.817996 4761 generic.go:334] "Generic (PLEG): container finished" podID="88f12d9b-cb82-4690-be2c-35d91899a86a" containerID="f8da5910371e18b2edd6886052f3fe7116d73f39cd035e121ecb5be718578e8f" exitCode=0 Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.818057 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"88f12d9b-cb82-4690-be2c-35d91899a86a","Type":"ContainerDied","Data":"f8da5910371e18b2edd6886052f3fe7116d73f39cd035e121ecb5be718578e8f"} Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.818993 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:21 crc 
kubenswrapper[4761]: I0307 07:53:21.819533 4761 status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:22 crc kubenswrapper[4761]: E0307 07:53:22.729324 4761 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189a7fdf8212b620 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:53:21.2604104 +0000 UTC m=+258.169576875,LastTimestamp:2026-03-07 07:53:21.2604104 +0000 UTC m=+258.169576875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:53:22 crc kubenswrapper[4761]: I0307 07:53:22.831419 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.228932 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.230145 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.231941 4761 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.232307 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.232566 4761 status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.289944 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.290521 4761 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.291097 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.291397 4761 status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.335206 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.335497 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.335339 4761 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.335554 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.335932 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88f12d9b-cb82-4690-be2c-35d91899a86a-kube-api-access\") pod \"88f12d9b-cb82-4690-be2c-35d91899a86a\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.336349 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.336421 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.337078 4761 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.337234 4761 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.337361 4761 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.341371 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88f12d9b-cb82-4690-be2c-35d91899a86a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "88f12d9b-cb82-4690-be2c-35d91899a86a" (UID: "88f12d9b-cb82-4690-be2c-35d91899a86a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.438595 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-var-lock\") pod \"88f12d9b-cb82-4690-be2c-35d91899a86a\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.438688 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-kubelet-dir\") pod \"88f12d9b-cb82-4690-be2c-35d91899a86a\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.438772 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-var-lock" (OuterVolumeSpecName: "var-lock") pod "88f12d9b-cb82-4690-be2c-35d91899a86a" (UID: "88f12d9b-cb82-4690-be2c-35d91899a86a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.438790 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "88f12d9b-cb82-4690-be2c-35d91899a86a" (UID: "88f12d9b-cb82-4690-be2c-35d91899a86a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.439275 4761 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.439303 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88f12d9b-cb82-4690-be2c-35d91899a86a-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.439324 4761 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.712754 4761 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.713549 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.714406 4761 status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.718086 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.843786 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.844484 4761 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19" exitCode=0 Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.844563 4761 scope.go:117] "RemoveContainer" containerID="563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.844586 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.845338 4761 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.845828 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.846237 4761 status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.846568 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"88f12d9b-cb82-4690-be2c-35d91899a86a","Type":"ContainerDied","Data":"a5cef952adcb249cb54e99547d65e9b16d3d92a191a324f0f82c885682808b37"} Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.846591 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5cef952adcb249cb54e99547d65e9b16d3d92a191a324f0f82c885682808b37" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.846651 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.848397 4761 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.848638 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.848894 4761 status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.853971 4761 status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.854367 4761 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.854703 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.865793 4761 scope.go:117] "RemoveContainer" containerID="f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.885239 4761 scope.go:117] "RemoveContainer" containerID="978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.900954 4761 scope.go:117] "RemoveContainer" containerID="2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.922924 4761 scope.go:117] "RemoveContainer" containerID="ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.948075 4761 scope.go:117] "RemoveContainer" containerID="dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.979132 4761 scope.go:117] "RemoveContainer" containerID="563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78" Mar 07 07:53:23 crc kubenswrapper[4761]: E0307 07:53:23.979620 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\": container with ID starting with 563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78 not found: ID does not exist" 
containerID="563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.979670 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78"} err="failed to get container status \"563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\": rpc error: code = NotFound desc = could not find container \"563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\": container with ID starting with 563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78 not found: ID does not exist" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.979698 4761 scope.go:117] "RemoveContainer" containerID="f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b" Mar 07 07:53:23 crc kubenswrapper[4761]: E0307 07:53:23.980200 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\": container with ID starting with f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b not found: ID does not exist" containerID="f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.980218 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b"} err="failed to get container status \"f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\": rpc error: code = NotFound desc = could not find container \"f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\": container with ID starting with f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b not found: ID does not exist" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.980230 4761 scope.go:117] 
"RemoveContainer" containerID="978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315" Mar 07 07:53:23 crc kubenswrapper[4761]: E0307 07:53:23.980433 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\": container with ID starting with 978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315 not found: ID does not exist" containerID="978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.980457 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315"} err="failed to get container status \"978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\": rpc error: code = NotFound desc = could not find container \"978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\": container with ID starting with 978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315 not found: ID does not exist" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.980476 4761 scope.go:117] "RemoveContainer" containerID="2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9" Mar 07 07:53:23 crc kubenswrapper[4761]: E0307 07:53:23.981166 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\": container with ID starting with 2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9 not found: ID does not exist" containerID="2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.981343 4761 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9"} err="failed to get container status \"2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\": rpc error: code = NotFound desc = could not find container \"2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\": container with ID starting with 2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9 not found: ID does not exist" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.981395 4761 scope.go:117] "RemoveContainer" containerID="ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19" Mar 07 07:53:23 crc kubenswrapper[4761]: E0307 07:53:23.981927 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\": container with ID starting with ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19 not found: ID does not exist" containerID="ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.981957 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19"} err="failed to get container status \"ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\": rpc error: code = NotFound desc = could not find container \"ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\": container with ID starting with ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19 not found: ID does not exist" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.981971 4761 scope.go:117] "RemoveContainer" containerID="dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702" Mar 07 07:53:23 crc kubenswrapper[4761]: E0307 07:53:23.982290 4761 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\": container with ID starting with dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702 not found: ID does not exist" containerID="dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.982317 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702"} err="failed to get container status \"dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\": rpc error: code = NotFound desc = could not find container \"dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\": container with ID starting with dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702 not found: ID does not exist" Mar 07 07:53:24 crc kubenswrapper[4761]: E0307 07:53:24.618712 4761 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:24 crc kubenswrapper[4761]: E0307 07:53:24.619092 4761 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:24 crc kubenswrapper[4761]: E0307 07:53:24.619411 4761 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:24 crc kubenswrapper[4761]: E0307 07:53:24.619748 4761 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:24 crc kubenswrapper[4761]: E0307 07:53:24.620055 4761 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:24 crc kubenswrapper[4761]: I0307 07:53:24.620115 4761 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 07 07:53:24 crc kubenswrapper[4761]: E0307 07:53:24.620637 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="200ms" Mar 07 07:53:24 crc kubenswrapper[4761]: E0307 07:53:24.821840 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="400ms" Mar 07 07:53:25 crc kubenswrapper[4761]: E0307 07:53:25.223464 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="800ms" Mar 07 07:53:26 crc kubenswrapper[4761]: E0307 07:53:26.023978 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="1.6s" Mar 07 
07:53:27 crc kubenswrapper[4761]: E0307 07:53:27.625613 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="3.2s" Mar 07 07:53:30 crc kubenswrapper[4761]: E0307 07:53:30.826908 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="6.4s" Mar 07 07:53:32 crc kubenswrapper[4761]: I0307 07:53:32.705144 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:32 crc kubenswrapper[4761]: I0307 07:53:32.707433 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:32 crc kubenswrapper[4761]: I0307 07:53:32.708608 4761 status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:32 crc kubenswrapper[4761]: I0307 07:53:32.730630 4761 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:32 crc kubenswrapper[4761]: I0307 07:53:32.730686 4761 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:32 crc kubenswrapper[4761]: E0307 07:53:32.730980 4761 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189a7fdf8212b620 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:53:21.2604104 +0000 UTC m=+258.169576875,LastTimestamp:2026-03-07 07:53:21.2604104 +0000 UTC m=+258.169576875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:53:32 crc kubenswrapper[4761]: E0307 07:53:32.731426 4761 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:32 crc kubenswrapper[4761]: I0307 07:53:32.732253 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:32 crc kubenswrapper[4761]: W0307 07:53:32.761378 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-ce8b2a43581d5a5997cc916adc6b9fa68a7e585e29b2d6205ea2c6a1d881e8a6 WatchSource:0}: Error finding container ce8b2a43581d5a5997cc916adc6b9fa68a7e585e29b2d6205ea2c6a1d881e8a6: Status 404 returned error can't find the container with id ce8b2a43581d5a5997cc916adc6b9fa68a7e585e29b2d6205ea2c6a1d881e8a6 Mar 07 07:53:32 crc kubenswrapper[4761]: I0307 07:53:32.920515 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ce8b2a43581d5a5997cc916adc6b9fa68a7e585e29b2d6205ea2c6a1d881e8a6"} Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.715349 4761 status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.716545 4761 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.717248 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.929206 4761 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="87ba216f8fb02d3597129e78152233de86d31cb645327e2fa95c7ec20a047e1d" exitCode=0 Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.929251 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"87ba216f8fb02d3597129e78152233de86d31cb645327e2fa95c7ec20a047e1d"} Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.929935 4761 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.929984 4761 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.930192 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:33 crc kubenswrapper[4761]: E0307 07:53:33.930579 4761 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.930641 4761 
status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.931197 4761 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:34 crc kubenswrapper[4761]: I0307 07:53:34.939294 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9db3af18ad2199ffdf8d3fa3a681106c31d506a520bd2d59b3d60586aa9d6669"} Mar 07 07:53:34 crc kubenswrapper[4761]: I0307 07:53:34.939575 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d43ad56423e46941ba846e6a62962a2ee0fb493fba631302e6d75734506931d6"} Mar 07 07:53:34 crc kubenswrapper[4761]: I0307 07:53:34.939590 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8112c1775d9c3058100708306d518854705232878fa4b1af41e4fc94462f9db0"} Mar 07 07:53:35 crc kubenswrapper[4761]: I0307 07:53:35.945448 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9e4d566ed655f0e954bd54e6c46984551470b4ab9eb7960ac18dec65c11bc8f5"} Mar 07 07:53:35 crc 
kubenswrapper[4761]: I0307 07:53:35.945767 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b8e3eb9cc83925698f2aadd77c9d71e4d8c8a77ec00d30fe7518086350890f5f"} Mar 07 07:53:35 crc kubenswrapper[4761]: I0307 07:53:35.945921 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:35 crc kubenswrapper[4761]: I0307 07:53:35.946029 4761 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:35 crc kubenswrapper[4761]: I0307 07:53:35.946059 4761 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:35 crc kubenswrapper[4761]: I0307 07:53:35.947679 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 07 07:53:35 crc kubenswrapper[4761]: I0307 07:53:35.948623 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 07 07:53:35 crc kubenswrapper[4761]: I0307 07:53:35.948661 4761 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942" exitCode=1 Mar 07 07:53:35 crc kubenswrapper[4761]: I0307 07:53:35.948686 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942"} Mar 07 07:53:35 
crc kubenswrapper[4761]: I0307 07:53:35.949189 4761 scope.go:117] "RemoveContainer" containerID="c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942" Mar 07 07:53:36 crc kubenswrapper[4761]: I0307 07:53:36.956595 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 07 07:53:36 crc kubenswrapper[4761]: I0307 07:53:36.959216 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 07 07:53:36 crc kubenswrapper[4761]: I0307 07:53:36.959283 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2e6600470846f9c3bf1c986582e80c3146e6e566fe0df1156004be04da8a6964"} Mar 07 07:53:37 crc kubenswrapper[4761]: I0307 07:53:37.733082 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:37 crc kubenswrapper[4761]: I0307 07:53:37.733439 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:37 crc kubenswrapper[4761]: I0307 07:53:37.741509 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:37 crc kubenswrapper[4761]: I0307 07:53:37.950611 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:53:40 crc kubenswrapper[4761]: I0307 07:53:40.033179 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:53:40 crc 
kubenswrapper[4761]: I0307 07:53:40.033798 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 07 07:53:40 crc kubenswrapper[4761]: I0307 07:53:40.033858 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 07 07:53:40 crc kubenswrapper[4761]: I0307 07:53:40.959765 4761 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:40 crc kubenswrapper[4761]: I0307 07:53:40.993168 4761 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:40 crc kubenswrapper[4761]: I0307 07:53:40.993198 4761 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:41 crc kubenswrapper[4761]: I0307 07:53:41.000094 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:41 crc kubenswrapper[4761]: I0307 07:53:41.013387 4761 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1e4b2dae-b939-4697-949e-d6f05ebc2004" Mar 07 07:53:41 crc kubenswrapper[4761]: I0307 07:53:41.998318 4761 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:41 crc kubenswrapper[4761]: I0307 07:53:41.998691 4761 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:42 crc kubenswrapper[4761]: I0307 07:53:42.000970 4761 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1e4b2dae-b939-4697-949e-d6f05ebc2004" Mar 07 07:53:43 crc kubenswrapper[4761]: I0307 07:53:43.768937 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:53:43 crc kubenswrapper[4761]: I0307 07:53:43.769205 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:53:50 crc kubenswrapper[4761]: I0307 07:53:50.039439 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:53:50 crc kubenswrapper[4761]: I0307 07:53:50.049328 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:53:50 crc kubenswrapper[4761]: I0307 07:53:50.086363 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 07 07:53:51 crc kubenswrapper[4761]: I0307 07:53:51.957659 4761 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 07 07:53:51 crc kubenswrapper[4761]: I0307 07:53:51.972019 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 07 07:53:51 crc kubenswrapper[4761]: I0307 07:53:51.999196 4761 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 07 07:53:52 crc kubenswrapper[4761]: I0307 07:53:52.155835 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 07 07:53:52 crc kubenswrapper[4761]: I0307 07:53:52.333350 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 07 07:53:52 crc kubenswrapper[4761]: I0307 07:53:52.341329 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 07 07:53:52 crc kubenswrapper[4761]: I0307 07:53:52.396612 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 07 07:53:52 crc kubenswrapper[4761]: I0307 07:53:52.546855 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 07 07:53:52 crc kubenswrapper[4761]: I0307 07:53:52.654985 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 07 07:53:52 crc kubenswrapper[4761]: I0307 07:53:52.741845 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 07 07:53:52 crc kubenswrapper[4761]: I0307 07:53:52.800295 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 07 07:53:52 crc kubenswrapper[4761]: I0307 07:53:52.959099 4761 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 07 07:53:53 crc kubenswrapper[4761]: I0307 07:53:53.176843 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 07 07:53:53 crc kubenswrapper[4761]: I0307 07:53:53.345702 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 07 07:53:53 crc kubenswrapper[4761]: I0307 07:53:53.407972 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 07 07:53:53 crc kubenswrapper[4761]: I0307 07:53:53.443572 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 07 07:53:53 crc kubenswrapper[4761]: I0307 07:53:53.637317 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 07 07:53:53 crc kubenswrapper[4761]: I0307 07:53:53.798662 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 07 07:53:53 crc kubenswrapper[4761]: I0307 07:53:53.931973 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.107276 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.176128 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.366002 4761 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.400310 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.423077 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.523977 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.561656 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.618439 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.659094 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.684668 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.686855 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.721767 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.731672 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 07 07:53:54 crc kubenswrapper[4761]: 
I0307 07:53:54.788678 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.819403 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.867993 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.875869 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.919526 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.958146 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.114370 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.271905 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.343850 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.360218 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.385844 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 
07:53:55.394792 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.421487 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.422860 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.585538 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.655868 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.674804 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.737968 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.822819 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.825724 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.830705 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.890731 4761 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.917399 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.941132 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.956712 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.046380 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.055620 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.112271 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.116213 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.133753 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.253059 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.259173 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.269304 4761 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.382951 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.476585 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.517122 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.545948 4761 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.567591 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.829855 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.832687 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.835528 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.856521 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.887126 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 07 07:53:56 crc kubenswrapper[4761]: 
I0307 07:53:56.902216 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.903313 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.049211 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.109356 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.167890 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.258765 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.264610 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.371500 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.373432 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.400004 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.426014 4761 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.430905 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.465377 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.478300 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.480039 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.622999 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.630116 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.640615 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.825869 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.866504 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.942128 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.942360 4761 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.987275 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.106895 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.126163 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.154478 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.242128 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.252386 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.259584 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.371773 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.379006 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.523196 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 07 
07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.602626 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.674058 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.782130 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.826957 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.893110 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.917285 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.002017 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.038490 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.056199 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.061110 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.098701 4761 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.320692 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.386860 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.413661 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.482707 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.565458 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.658802 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.778737 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.807310 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.835525 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.941366 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 07 07:54:00 crc kubenswrapper[4761]: 
I0307 07:54:00.036021 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.104598 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.124619 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.201006 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.243651 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.311594 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.323144 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.437320 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.451181 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.630088 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.635548 4761 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.647907 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.655148 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.671225 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.773271 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.912646 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.925094 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.936421 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.944799 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.045048 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.049438 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.139061 4761 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.146816 4761 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.152329 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.190652 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.212653 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.280124 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.312096 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.456471 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.472837 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.570853 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.587770 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.616268 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.619833 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.656872 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.684295 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.794751 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.802587 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.819360 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.839424 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.913240 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.920946 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.062427 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.143289 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.223767 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.328844 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.346756 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.377049 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.404470 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.406079 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.428746 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.481838 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.495896 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.572221 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.607243 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.630352 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.647616 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.710451 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.727495 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.808311 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.919291 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.938753 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.999021 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.081505 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.118380 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.118941 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.137448 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.161549 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.245534 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.284226 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.428755 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.491771 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.606310 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.619980 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.720828 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.777429 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.782842 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.829762 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.846152 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.860308 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.989262 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.043034 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.048572 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.090293 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.189486 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.259105 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.261106 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.331180 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.380517 4761 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.381348 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=44.381328822 podStartE2EDuration="44.381328822s" podCreationTimestamp="2026-03-07 07:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:53:40.969740301 +0000 UTC m=+277.878906776" watchObservedRunningTime="2026-03-07 07:54:04.381328822 +0000 UTC m=+301.290495307"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.386284 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.386342 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.390018 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.408215 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.408196502 podStartE2EDuration="24.408196502s" podCreationTimestamp="2026-03-07 07:53:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:54:04.40624313 +0000 UTC m=+301.315409605" watchObservedRunningTime="2026-03-07 07:54:04.408196502 +0000 UTC m=+301.317362987"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.420592 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.480862 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.489376 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.789534 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.790613 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.790790 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.882490 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.946175 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.958014 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.964204 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.999647 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.011150 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.014758 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.106427 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.124996 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.157567 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.271466 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.321273 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.523687 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.526788 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.675299 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.699988 4761 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.761117 4761 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.045283 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.067627 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.307102 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.334932 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547834-vbflv"]
Mar 07 07:54:06 crc kubenswrapper[4761]: E0307 07:54:06.335158 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" containerName="installer"
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.335169 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" containerName="installer"
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.335252 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" containerName="installer"
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.335576 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547834-vbflv"
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.337477 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.337565 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.337664 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv"
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.340646 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547834-vbflv"]
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.370503 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.384166 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cf95\" (UniqueName: \"kubernetes.io/projected/44149f32-4111-4706-977e-411d6011bb02-kube-api-access-2cf95\") pod \"auto-csr-approver-29547834-vbflv\" (UID: \"44149f32-4111-4706-977e-411d6011bb02\") " pod="openshift-infra/auto-csr-approver-29547834-vbflv"
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.412310 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.485710 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cf95\" (UniqueName: \"kubernetes.io/projected/44149f32-4111-4706-977e-411d6011bb02-kube-api-access-2cf95\") pod \"auto-csr-approver-29547834-vbflv\" (UID: \"44149f32-4111-4706-977e-411d6011bb02\") " pod="openshift-infra/auto-csr-approver-29547834-vbflv"
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.515296 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cf95\" (UniqueName: \"kubernetes.io/projected/44149f32-4111-4706-977e-411d6011bb02-kube-api-access-2cf95\") pod \"auto-csr-approver-29547834-vbflv\" (UID: \"44149f32-4111-4706-977e-411d6011bb02\") " pod="openshift-infra/auto-csr-approver-29547834-vbflv"
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.649173 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547834-vbflv"
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.696164 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.783642 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.866995 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 07 07:54:07 crc kubenswrapper[4761]: I0307 07:54:07.073785 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547834-vbflv"]
Mar 07 07:54:07 crc kubenswrapper[4761]: W0307 07:54:07.082699 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44149f32_4111_4706_977e_411d6011bb02.slice/crio-504a0922589729acec8682b4ead76c3217a9698d741279fd74bca36300a6c06d WatchSource:0}: Error finding container 504a0922589729acec8682b4ead76c3217a9698d741279fd74bca36300a6c06d: Status 404 returned error can't find the container with id 504a0922589729acec8682b4ead76c3217a9698d741279fd74bca36300a6c06d
Mar 07 07:54:07 crc kubenswrapper[4761]: I0307 07:54:07.177085 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547834-vbflv" event={"ID":"44149f32-4111-4706-977e-411d6011bb02","Type":"ContainerStarted","Data":"504a0922589729acec8682b4ead76c3217a9698d741279fd74bca36300a6c06d"}
Mar 07 07:54:08 crc kubenswrapper[4761]: I0307 07:54:08.368170 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 07 07:54:08 crc kubenswrapper[4761]: I0307 07:54:08.516263 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 07 07:54:09 crc kubenswrapper[4761]: I0307 07:54:09.101855 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 07 07:54:09 crc kubenswrapper[4761]: I0307 07:54:09.193400 4761 generic.go:334] "Generic (PLEG): container finished" podID="44149f32-4111-4706-977e-411d6011bb02" containerID="829cbe3ab09ee538f0ac491499b1b8d9f6872046415226f166160c3c514103af" exitCode=0
Mar 07 07:54:09 crc kubenswrapper[4761]: I0307 07:54:09.193453 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547834-vbflv" event={"ID":"44149f32-4111-4706-977e-411d6011bb02","Type":"ContainerDied","Data":"829cbe3ab09ee538f0ac491499b1b8d9f6872046415226f166160c3c514103af"}
Mar 07 07:54:10 crc kubenswrapper[4761]: I0307 07:54:10.545237 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547834-vbflv"
Mar 07 07:54:10 crc kubenswrapper[4761]: I0307 07:54:10.742871 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cf95\" (UniqueName: \"kubernetes.io/projected/44149f32-4111-4706-977e-411d6011bb02-kube-api-access-2cf95\") pod \"44149f32-4111-4706-977e-411d6011bb02\" (UID: \"44149f32-4111-4706-977e-411d6011bb02\") "
Mar 07 07:54:10 crc kubenswrapper[4761]: I0307 07:54:10.750953 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44149f32-4111-4706-977e-411d6011bb02-kube-api-access-2cf95" (OuterVolumeSpecName: "kube-api-access-2cf95") pod "44149f32-4111-4706-977e-411d6011bb02" (UID: "44149f32-4111-4706-977e-411d6011bb02"). InnerVolumeSpecName "kube-api-access-2cf95". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:54:10 crc kubenswrapper[4761]: I0307 07:54:10.844895 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cf95\" (UniqueName: \"kubernetes.io/projected/44149f32-4111-4706-977e-411d6011bb02-kube-api-access-2cf95\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.208702 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547834-vbflv" event={"ID":"44149f32-4111-4706-977e-411d6011bb02","Type":"ContainerDied","Data":"504a0922589729acec8682b4ead76c3217a9698d741279fd74bca36300a6c06d"}
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.209119 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="504a0922589729acec8682b4ead76c3217a9698d741279fd74bca36300a6c06d"
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.208865 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547834-vbflv"
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.233556 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fdb49984-plf57"]
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.233842 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" podUID="abb53652-b9ee-41d0-9152-4b71fcdb1e7e" containerName="controller-manager" containerID="cri-o://19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb" gracePeriod=30
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.327375 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb"]
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.327667 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" podUID="a8a90600-3887-4769-a6be-c49c04603b77" containerName="route-controller-manager" containerID="cri-o://4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca" gracePeriod=30
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.734249 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57"
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.738251 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb"
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.765495 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-proxy-ca-bundles\") pod \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") "
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.766470 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "abb53652-b9ee-41d0-9152-4b71fcdb1e7e" (UID: "abb53652-b9ee-41d0-9152-4b71fcdb1e7e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.866960 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-config\") pod \"a8a90600-3887-4769-a6be-c49c04603b77\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") "
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.867054 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8a90600-3887-4769-a6be-c49c04603b77-serving-cert\") pod \"a8a90600-3887-4769-a6be-c49c04603b77\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") "
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.867102 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdhbn\" (UniqueName: \"kubernetes.io/projected/a8a90600-3887-4769-a6be-c49c04603b77-kube-api-access-vdhbn\") pod \"a8a90600-3887-4769-a6be-c49c04603b77\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") "
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.867138 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n65cj\" (UniqueName: \"kubernetes.io/projected/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-kube-api-access-n65cj\") pod \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") "
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.867183 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-serving-cert\") pod \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") "
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.867214 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-client-ca\") pod \"a8a90600-3887-4769-a6be-c49c04603b77\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") "
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.867272 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-client-ca\") pod \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") "
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.867316 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-config\") pod \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") "
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.867972 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.868206 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-config" (OuterVolumeSpecName: "config") pod "a8a90600-3887-4769-a6be-c49c04603b77" (UID: "a8a90600-3887-4769-a6be-c49c04603b77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.868607 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-client-ca" (OuterVolumeSpecName: "client-ca") pod "abb53652-b9ee-41d0-9152-4b71fcdb1e7e" (UID: "abb53652-b9ee-41d0-9152-4b71fcdb1e7e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.868662 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-config" (OuterVolumeSpecName: "config") pod "abb53652-b9ee-41d0-9152-4b71fcdb1e7e" (UID: "abb53652-b9ee-41d0-9152-4b71fcdb1e7e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.869013 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-client-ca" (OuterVolumeSpecName: "client-ca") pod "a8a90600-3887-4769-a6be-c49c04603b77" (UID: "a8a90600-3887-4769-a6be-c49c04603b77"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.871916 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8a90600-3887-4769-a6be-c49c04603b77-kube-api-access-vdhbn" (OuterVolumeSpecName: "kube-api-access-vdhbn") pod "a8a90600-3887-4769-a6be-c49c04603b77" (UID: "a8a90600-3887-4769-a6be-c49c04603b77"). InnerVolumeSpecName "kube-api-access-vdhbn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.872889 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-kube-api-access-n65cj" (OuterVolumeSpecName: "kube-api-access-n65cj") pod "abb53652-b9ee-41d0-9152-4b71fcdb1e7e" (UID: "abb53652-b9ee-41d0-9152-4b71fcdb1e7e"). InnerVolumeSpecName "kube-api-access-n65cj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.873496 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8a90600-3887-4769-a6be-c49c04603b77-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a8a90600-3887-4769-a6be-c49c04603b77" (UID: "a8a90600-3887-4769-a6be-c49c04603b77"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.881203 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "abb53652-b9ee-41d0-9152-4b71fcdb1e7e" (UID: "abb53652-b9ee-41d0-9152-4b71fcdb1e7e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.969530 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdhbn\" (UniqueName: \"kubernetes.io/projected/a8a90600-3887-4769-a6be-c49c04603b77-kube-api-access-vdhbn\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.969590 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n65cj\" (UniqueName: \"kubernetes.io/projected/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-kube-api-access-n65cj\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.969613 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.969633 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-client-ca\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.969651 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-client-ca\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.969668 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.969684 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.969700 4761 reconciler_common.go:293] "Volume
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8a90600-3887-4769-a6be-c49c04603b77-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.216615 4761 generic.go:334] "Generic (PLEG): container finished" podID="abb53652-b9ee-41d0-9152-4b71fcdb1e7e" containerID="19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb" exitCode=0 Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.216671 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.216692 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" event={"ID":"abb53652-b9ee-41d0-9152-4b71fcdb1e7e","Type":"ContainerDied","Data":"19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb"} Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.216753 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" event={"ID":"abb53652-b9ee-41d0-9152-4b71fcdb1e7e","Type":"ContainerDied","Data":"f9e04264fdbf32961bf51705d8c383d7888571a0733f5db7cc525d9e3bdaddcf"} Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.216775 4761 scope.go:117] "RemoveContainer" containerID="19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.225592 4761 generic.go:334] "Generic (PLEG): container finished" podID="a8a90600-3887-4769-a6be-c49c04603b77" containerID="4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca" exitCode=0 Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.225637 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" 
event={"ID":"a8a90600-3887-4769-a6be-c49c04603b77","Type":"ContainerDied","Data":"4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca"} Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.225673 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" event={"ID":"a8a90600-3887-4769-a6be-c49c04603b77","Type":"ContainerDied","Data":"7aafe68333d5ac096b99d20cb1e129db74f2f470591377afefd9cf0185d6caa1"} Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.225680 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.243079 4761 scope.go:117] "RemoveContainer" containerID="19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb" Mar 07 07:54:12 crc kubenswrapper[4761]: E0307 07:54:12.245100 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb\": container with ID starting with 19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb not found: ID does not exist" containerID="19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.245151 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb"} err="failed to get container status \"19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb\": rpc error: code = NotFound desc = could not find container \"19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb\": container with ID starting with 19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb not found: ID does not exist" Mar 07 07:54:12 crc 
kubenswrapper[4761]: I0307 07:54:12.245176 4761 scope.go:117] "RemoveContainer" containerID="4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.246002 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fdb49984-plf57"] Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.251138 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6fdb49984-plf57"] Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.269598 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb"] Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.274067 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb"] Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.277837 4761 scope.go:117] "RemoveContainer" containerID="4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca" Mar 07 07:54:12 crc kubenswrapper[4761]: E0307 07:54:12.278534 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca\": container with ID starting with 4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca not found: ID does not exist" containerID="4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.278599 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca"} err="failed to get container status \"4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca\": rpc error: code = NotFound desc = could not find container 
\"4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca\": container with ID starting with 4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca not found: ID does not exist" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.488540 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt"] Mar 07 07:54:12 crc kubenswrapper[4761]: E0307 07:54:12.489322 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb53652-b9ee-41d0-9152-4b71fcdb1e7e" containerName="controller-manager" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.489358 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb53652-b9ee-41d0-9152-4b71fcdb1e7e" containerName="controller-manager" Mar 07 07:54:12 crc kubenswrapper[4761]: E0307 07:54:12.489395 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44149f32-4111-4706-977e-411d6011bb02" containerName="oc" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.489411 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="44149f32-4111-4706-977e-411d6011bb02" containerName="oc" Mar 07 07:54:12 crc kubenswrapper[4761]: E0307 07:54:12.489440 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8a90600-3887-4769-a6be-c49c04603b77" containerName="route-controller-manager" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.489454 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8a90600-3887-4769-a6be-c49c04603b77" containerName="route-controller-manager" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.489629 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8a90600-3887-4769-a6be-c49c04603b77" containerName="route-controller-manager" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.489651 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="abb53652-b9ee-41d0-9152-4b71fcdb1e7e" containerName="controller-manager" Mar 07 
07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.489675 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="44149f32-4111-4706-977e-411d6011bb02" containerName="oc" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.490373 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.492865 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.493327 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.493563 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.494770 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.494977 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.496523 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.500599 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598"] Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.501847 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.505914 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.506032 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.508507 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.508678 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.509063 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt"] Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.509850 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.510140 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.511650 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.521547 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598"] Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.678593 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dxzlw\" (UniqueName: \"kubernetes.io/projected/503244c0-cbcb-4296-95eb-069f504136d0-kube-api-access-dxzlw\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.678662 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-client-ca\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.678775 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-config\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.678808 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/503244c0-cbcb-4296-95eb-069f504136d0-serving-cert\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.679013 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-config\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: 
\"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.679148 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk9gj\" (UniqueName: \"kubernetes.io/projected/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-kube-api-access-wk9gj\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.679227 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-serving-cert\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.679252 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-proxy-ca-bundles\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.679344 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-client-ca\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.780359 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-serving-cert\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.780418 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-proxy-ca-bundles\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.780491 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-client-ca\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.780534 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxzlw\" (UniqueName: \"kubernetes.io/projected/503244c0-cbcb-4296-95eb-069f504136d0-kube-api-access-dxzlw\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.780587 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-client-ca\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " 
pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.780646 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-config\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.780678 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/503244c0-cbcb-4296-95eb-069f504136d0-serving-cert\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.780754 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-config\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.780808 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk9gj\" (UniqueName: \"kubernetes.io/projected/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-kube-api-access-wk9gj\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.782067 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-client-ca\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.782360 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-config\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.783791 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-config\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.785235 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-client-ca\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.785786 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/503244c0-cbcb-4296-95eb-069f504136d0-serving-cert\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.785892 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-proxy-ca-bundles\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.794384 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-serving-cert\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.806304 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk9gj\" (UniqueName: \"kubernetes.io/projected/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-kube-api-access-wk9gj\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.816670 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxzlw\" (UniqueName: \"kubernetes.io/projected/503244c0-cbcb-4296-95eb-069f504136d0-kube-api-access-dxzlw\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.835325 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.852238 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.086587 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt"] Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.237021 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" event={"ID":"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba","Type":"ContainerStarted","Data":"f6e290facd4b3b3c4f20d9bbb280b86ca1939b5cd61af61ed2c4f6c5c47b2a8e"} Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.237408 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" event={"ID":"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba","Type":"ContainerStarted","Data":"ebb35fd1af10390eb1375705380951ab17fb0ad7b563cf68a3541f54340f7d47"} Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.237437 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.239323 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-gt7mt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.239358 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" podUID="a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 
07:54:13.258448 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" podStartSLOduration=2.258427637 podStartE2EDuration="2.258427637s" podCreationTimestamp="2026-03-07 07:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:54:13.254744108 +0000 UTC m=+310.163910613" watchObservedRunningTime="2026-03-07 07:54:13.258427637 +0000 UTC m=+310.167594112" Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.341517 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598"] Mar 07 07:54:13 crc kubenswrapper[4761]: W0307 07:54:13.345566 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod503244c0_cbcb_4296_95eb_069f504136d0.slice/crio-09aade76b8dc3ddc09a43380ed1936418a49c51f38e0b8b26efffd387b3df77b WatchSource:0}: Error finding container 09aade76b8dc3ddc09a43380ed1936418a49c51f38e0b8b26efffd387b3df77b: Status 404 returned error can't find the container with id 09aade76b8dc3ddc09a43380ed1936418a49c51f38e0b8b26efffd387b3df77b Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.713453 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8a90600-3887-4769-a6be-c49c04603b77" path="/var/lib/kubelet/pods/a8a90600-3887-4769-a6be-c49c04603b77/volumes" Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.714146 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abb53652-b9ee-41d0-9152-4b71fcdb1e7e" path="/var/lib/kubelet/pods/abb53652-b9ee-41d0-9152-4b71fcdb1e7e/volumes" Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.768493 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.768547 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.768603 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9"
Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.769182 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.769234 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897" gracePeriod=600
Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.247814 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" event={"ID":"503244c0-cbcb-4296-95eb-069f504136d0","Type":"ContainerStarted","Data":"afcd6724b5ad4651726fcef7e612cbcbf103587a6e92ea2395d462390b10d53e"}
Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.248103 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598"
Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.248115 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" event={"ID":"503244c0-cbcb-4296-95eb-069f504136d0","Type":"ContainerStarted","Data":"09aade76b8dc3ddc09a43380ed1936418a49c51f38e0b8b26efffd387b3df77b"}
Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.252361 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897" exitCode=0
Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.252443 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897"}
Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.252494 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"99999bd284e69fd9faa6103a00d03a466d499b9bac79905f9b3132ce0f479790"}
Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.253179 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598"
Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.256397 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt"
Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.266405 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" podStartSLOduration=3.26638765 podStartE2EDuration="3.26638765s" podCreationTimestamp="2026-03-07 07:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:54:14.26379209 +0000 UTC m=+311.172958585" watchObservedRunningTime="2026-03-07 07:54:14.26638765 +0000 UTC m=+311.175554125"
Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.848778 4761 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.849189 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://c355e92130d0e99a3a13893d7dfea9a751cb2d75ba4a5de59dc8ae3c788e30c1" gracePeriod=5
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.303038 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.303899 4761 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="c355e92130d0e99a3a13893d7dfea9a751cb2d75ba4a5de59dc8ae3c788e30c1" exitCode=137
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.501594 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.501699 4761 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.685229 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.685360 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.685495 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.685591 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.685667 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.685743 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.685784 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.685810 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.685891 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.686215 4761 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.686250 4761 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.686323 4761 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.686347 4761 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.699703 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.787989 4761 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:21 crc kubenswrapper[4761]: I0307 07:54:21.310536 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Mar 07 07:54:21 crc kubenswrapper[4761]: I0307 07:54:21.310881 4761 scope.go:117] "RemoveContainer" containerID="c355e92130d0e99a3a13893d7dfea9a751cb2d75ba4a5de59dc8ae3c788e30c1"
Mar 07 07:54:21 crc kubenswrapper[4761]: I0307 07:54:21.310939 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 07:54:21 crc kubenswrapper[4761]: I0307 07:54:21.714358 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Mar 07 07:54:21 crc kubenswrapper[4761]: I0307 07:54:21.714598 4761 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Mar 07 07:54:21 crc kubenswrapper[4761]: I0307 07:54:21.725565 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 07 07:54:21 crc kubenswrapper[4761]: I0307 07:54:21.725595 4761 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="050a70ba-5d61-44a5-a0e4-4ad025921951"
Mar 07 07:54:21 crc kubenswrapper[4761]: I0307 07:54:21.728436 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api"
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 07 07:54:21 crc kubenswrapper[4761]: I0307 07:54:21.728472 4761 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="050a70ba-5d61-44a5-a0e4-4ad025921951"
Mar 07 07:54:22 crc kubenswrapper[4761]: I0307 07:54:22.319768 4761 generic.go:334] "Generic (PLEG): container finished" podID="69f8f788-a780-4cf1-9ef7-397428d61593" containerID="2f2df3f61605050ff823689a3ab84881edb02d6979ac541c6c9979f7a1145713" exitCode=0
Mar 07 07:54:22 crc kubenswrapper[4761]: I0307 07:54:22.319809 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" event={"ID":"69f8f788-a780-4cf1-9ef7-397428d61593","Type":"ContainerDied","Data":"2f2df3f61605050ff823689a3ab84881edb02d6979ac541c6c9979f7a1145713"}
Mar 07 07:54:22 crc kubenswrapper[4761]: I0307 07:54:22.320678 4761 scope.go:117] "RemoveContainer" containerID="2f2df3f61605050ff823689a3ab84881edb02d6979ac541c6c9979f7a1145713"
Mar 07 07:54:23 crc kubenswrapper[4761]: I0307 07:54:23.329561 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" event={"ID":"69f8f788-a780-4cf1-9ef7-397428d61593","Type":"ContainerStarted","Data":"01fa76c6bfa57c63d1718130ceb65ce0ca778bd9ca727c6c0c1fc1223a507d8b"}
Mar 07 07:54:23 crc kubenswrapper[4761]: I0307 07:54:23.330456 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw"
Mar 07 07:54:23 crc kubenswrapper[4761]: I0307 07:54:23.331997 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw"
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.222472 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt"]
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.223393 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" podUID="a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" containerName="controller-manager" containerID="cri-o://f6e290facd4b3b3c4f20d9bbb280b86ca1939b5cd61af61ed2c4f6c5c47b2a8e" gracePeriod=30
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.255456 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598"]
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.255755 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" podUID="503244c0-cbcb-4296-95eb-069f504136d0" containerName="route-controller-manager" containerID="cri-o://afcd6724b5ad4651726fcef7e612cbcbf103587a6e92ea2395d462390b10d53e" gracePeriod=30
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.384520 4761 generic.go:334] "Generic (PLEG): container finished" podID="a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" containerID="f6e290facd4b3b3c4f20d9bbb280b86ca1939b5cd61af61ed2c4f6c5c47b2a8e" exitCode=0
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.384575 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" event={"ID":"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba","Type":"ContainerDied","Data":"f6e290facd4b3b3c4f20d9bbb280b86ca1939b5cd61af61ed2c4f6c5c47b2a8e"}
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.386275 4761 generic.go:334] "Generic (PLEG): container finished" podID="503244c0-cbcb-4296-95eb-069f504136d0" containerID="afcd6724b5ad4651726fcef7e612cbcbf103587a6e92ea2395d462390b10d53e" exitCode=0
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.386302 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" event={"ID":"503244c0-cbcb-4296-95eb-069f504136d0","Type":"ContainerDied","Data":"afcd6724b5ad4651726fcef7e612cbcbf103587a6e92ea2395d462390b10d53e"}
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.777752 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598"
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.860747 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt"
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.955328 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/503244c0-cbcb-4296-95eb-069f504136d0-serving-cert\") pod \"503244c0-cbcb-4296-95eb-069f504136d0\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") "
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.955710 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk9gj\" (UniqueName: \"kubernetes.io/projected/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-kube-api-access-wk9gj\") pod \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") "
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.955791 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-serving-cert\") pod \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") "
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.955835 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxzlw\" (UniqueName:
\"kubernetes.io/projected/503244c0-cbcb-4296-95eb-069f504136d0-kube-api-access-dxzlw\") pod \"503244c0-cbcb-4296-95eb-069f504136d0\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") "
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.955884 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-proxy-ca-bundles\") pod \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") "
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.955962 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-config\") pod \"503244c0-cbcb-4296-95eb-069f504136d0\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") "
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.956000 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-client-ca\") pod \"503244c0-cbcb-4296-95eb-069f504136d0\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") "
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.956063 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-config\") pod \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") "
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.957241 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-client-ca" (OuterVolumeSpecName: "client-ca") pod "503244c0-cbcb-4296-95eb-069f504136d0" (UID: "503244c0-cbcb-4296-95eb-069f504136d0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.958253 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" (UID: "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.958352 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-config" (OuterVolumeSpecName: "config") pod "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" (UID: "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.958776 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-config" (OuterVolumeSpecName: "config") pod "503244c0-cbcb-4296-95eb-069f504136d0" (UID: "503244c0-cbcb-4296-95eb-069f504136d0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.964074 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/503244c0-cbcb-4296-95eb-069f504136d0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "503244c0-cbcb-4296-95eb-069f504136d0" (UID: "503244c0-cbcb-4296-95eb-069f504136d0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.964115 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/503244c0-cbcb-4296-95eb-069f504136d0-kube-api-access-dxzlw" (OuterVolumeSpecName: "kube-api-access-dxzlw") pod "503244c0-cbcb-4296-95eb-069f504136d0" (UID: "503244c0-cbcb-4296-95eb-069f504136d0"). InnerVolumeSpecName "kube-api-access-dxzlw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.964096 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-kube-api-access-wk9gj" (OuterVolumeSpecName: "kube-api-access-wk9gj") pod "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" (UID: "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba"). InnerVolumeSpecName "kube-api-access-wk9gj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.964143 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" (UID: "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.057216 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-client-ca\") pod \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") "
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.057534 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/503244c0-cbcb-4296-95eb-069f504136d0-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.057566 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk9gj\" (UniqueName: \"kubernetes.io/projected/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-kube-api-access-wk9gj\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.057588 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.057606 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxzlw\" (UniqueName: \"kubernetes.io/projected/503244c0-cbcb-4296-95eb-069f504136d0-kube-api-access-dxzlw\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.057623 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.057640 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.057657 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-client-ca\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.057673 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.058051 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-client-ca" (OuterVolumeSpecName: "client-ca") pod "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" (UID: "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.158595 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-client-ca\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.404786 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" event={"ID":"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba","Type":"ContainerDied","Data":"ebb35fd1af10390eb1375705380951ab17fb0ad7b563cf68a3541f54340f7d47"}
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.404873 4761 scope.go:117] "RemoveContainer" containerID="f6e290facd4b3b3c4f20d9bbb280b86ca1939b5cd61af61ed2c4f6c5c47b2a8e"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.405096 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.408558 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" event={"ID":"503244c0-cbcb-4296-95eb-069f504136d0","Type":"ContainerDied","Data":"09aade76b8dc3ddc09a43380ed1936418a49c51f38e0b8b26efffd387b3df77b"}
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.408794 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.430085 4761 scope.go:117] "RemoveContainer" containerID="afcd6724b5ad4651726fcef7e612cbcbf103587a6e92ea2395d462390b10d53e"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.454305 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt"]
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.464192 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt"]
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.475144 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598"]
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.479575 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598"]
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.485804 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"]
Mar 07 07:54:32 crc kubenswrapper[4761]: E0307 07:54:32.488765 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" containerName="controller-manager"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.488811 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" containerName="controller-manager"
Mar 07 07:54:32 crc kubenswrapper[4761]: E0307 07:54:32.488836 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="503244c0-cbcb-4296-95eb-069f504136d0" containerName="route-controller-manager"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.488849 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="503244c0-cbcb-4296-95eb-069f504136d0" containerName="route-controller-manager"
Mar 07 07:54:32 crc kubenswrapper[4761]: E0307 07:54:32.488879 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.488892 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.489086 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="503244c0-cbcb-4296-95eb-069f504136d0" containerName="route-controller-manager"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.489107 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" containerName="controller-manager"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.489140 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.489851 4761 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.490153 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-568f6f76cb-d44fh"]
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.490976 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.491939 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.493863 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.494235 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.494557 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.494779 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.494979 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.495205 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.496930 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.499028 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.499820 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.501883 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.501888 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.503446 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"]
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.505912 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.506246 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-568f6f76cb-d44fh"]
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.665942 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a047653-bc3d-49f6-a43e-79e45b8f8403-serving-cert\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.666024 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v85lh\" (UniqueName: \"kubernetes.io/projected/ec6e0bc0-596b-4842-8381-4336ae8f54f4-kube-api-access-v85lh\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.666199 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec6e0bc0-596b-4842-8381-4336ae8f54f4-serving-cert\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.666263 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l4mv\" (UniqueName: \"kubernetes.io/projected/6a047653-bc3d-49f6-a43e-79e45b8f8403-kube-api-access-8l4mv\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.666301 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-client-ca\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.666402 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-proxy-ca-bundles\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.666493 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-client-ca\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.666538 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-config\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.666581 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-config\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.767496 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a047653-bc3d-49f6-a43e-79e45b8f8403-serving-cert\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.767560 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v85lh\" (UniqueName: \"kubernetes.io/projected/ec6e0bc0-596b-4842-8381-4336ae8f54f4-kube-api-access-v85lh\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.767604 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec6e0bc0-596b-4842-8381-4336ae8f54f4-serving-cert\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.767663 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l4mv\" (UniqueName: \"kubernetes.io/projected/6a047653-bc3d-49f6-a43e-79e45b8f8403-kube-api-access-8l4mv\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.767703 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-client-ca\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh"
Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.767784 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-proxy-ca-bundles\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") "
pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.769177 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-client-ca\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.769374 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-client-ca\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.769939 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-proxy-ca-bundles\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.770804 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-client-ca\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.771009 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-config\") pod 
\"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.772745 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-config\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.772844 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-config\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.774896 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-config\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.780205 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec6e0bc0-596b-4842-8381-4336ae8f54f4-serving-cert\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.782700 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6a047653-bc3d-49f6-a43e-79e45b8f8403-serving-cert\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.800554 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v85lh\" (UniqueName: \"kubernetes.io/projected/ec6e0bc0-596b-4842-8381-4336ae8f54f4-kube-api-access-v85lh\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.801542 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l4mv\" (UniqueName: \"kubernetes.io/projected/6a047653-bc3d-49f6-a43e-79e45b8f8403-kube-api-access-8l4mv\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.824853 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.835553 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:33 crc kubenswrapper[4761]: I0307 07:54:33.121805 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-568f6f76cb-d44fh"] Mar 07 07:54:33 crc kubenswrapper[4761]: W0307 07:54:33.126537 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec6e0bc0_596b_4842_8381_4336ae8f54f4.slice/crio-12ae557f59fd48d5f93c954aae140bd486695f52cdce2207e6fa0df160c5c050 WatchSource:0}: Error finding container 12ae557f59fd48d5f93c954aae140bd486695f52cdce2207e6fa0df160c5c050: Status 404 returned error can't find the container with id 12ae557f59fd48d5f93c954aae140bd486695f52cdce2207e6fa0df160c5c050 Mar 07 07:54:33 crc kubenswrapper[4761]: I0307 07:54:33.421023 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"] Mar 07 07:54:33 crc kubenswrapper[4761]: I0307 07:54:33.422077 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" event={"ID":"ec6e0bc0-596b-4842-8381-4336ae8f54f4","Type":"ContainerStarted","Data":"995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3"} Mar 07 07:54:33 crc kubenswrapper[4761]: I0307 07:54:33.422107 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" event={"ID":"ec6e0bc0-596b-4842-8381-4336ae8f54f4","Type":"ContainerStarted","Data":"12ae557f59fd48d5f93c954aae140bd486695f52cdce2207e6fa0df160c5c050"} Mar 07 07:54:33 crc kubenswrapper[4761]: I0307 07:54:33.422986 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:33 crc kubenswrapper[4761]: I0307 07:54:33.431370 4761 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:33 crc kubenswrapper[4761]: I0307 07:54:33.447620 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" podStartSLOduration=2.447601932 podStartE2EDuration="2.447601932s" podCreationTimestamp="2026-03-07 07:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:54:33.445347171 +0000 UTC m=+330.354513716" watchObservedRunningTime="2026-03-07 07:54:33.447601932 +0000 UTC m=+330.356768407" Mar 07 07:54:33 crc kubenswrapper[4761]: I0307 07:54:33.712968 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="503244c0-cbcb-4296-95eb-069f504136d0" path="/var/lib/kubelet/pods/503244c0-cbcb-4296-95eb-069f504136d0/volumes" Mar 07 07:54:33 crc kubenswrapper[4761]: I0307 07:54:33.713820 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" path="/var/lib/kubelet/pods/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba/volumes" Mar 07 07:54:34 crc kubenswrapper[4761]: I0307 07:54:34.441923 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" event={"ID":"6a047653-bc3d-49f6-a43e-79e45b8f8403","Type":"ContainerStarted","Data":"26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27"} Mar 07 07:54:34 crc kubenswrapper[4761]: I0307 07:54:34.441973 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" event={"ID":"6a047653-bc3d-49f6-a43e-79e45b8f8403","Type":"ContainerStarted","Data":"d8bee143cea706c193650a53220b7c5ab5a798cd3c7e7e2f866ba5225ea372f8"} Mar 07 07:54:34 crc kubenswrapper[4761]: I0307 07:54:34.442531 4761 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:34 crc kubenswrapper[4761]: I0307 07:54:34.450858 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:34 crc kubenswrapper[4761]: I0307 07:54:34.461700 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" podStartSLOduration=3.461679727 podStartE2EDuration="3.461679727s" podCreationTimestamp="2026-03-07 07:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:54:34.459015326 +0000 UTC m=+331.368181821" watchObservedRunningTime="2026-03-07 07:54:34.461679727 +0000 UTC m=+331.370846212" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.226897 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-568f6f76cb-d44fh"] Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.227902 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" podUID="ec6e0bc0-596b-4842-8381-4336ae8f54f4" containerName="controller-manager" containerID="cri-o://995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3" gracePeriod=30 Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.254040 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"] Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.254771 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" 
podUID="6a047653-bc3d-49f6-a43e-79e45b8f8403" containerName="route-controller-manager" containerID="cri-o://26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27" gracePeriod=30 Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.619062 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.653315 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-config\") pod \"6a047653-bc3d-49f6-a43e-79e45b8f8403\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.653440 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-client-ca\") pod \"6a047653-bc3d-49f6-a43e-79e45b8f8403\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.653490 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l4mv\" (UniqueName: \"kubernetes.io/projected/6a047653-bc3d-49f6-a43e-79e45b8f8403-kube-api-access-8l4mv\") pod \"6a047653-bc3d-49f6-a43e-79e45b8f8403\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.654794 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a047653-bc3d-49f6-a43e-79e45b8f8403-serving-cert\") pod \"6a047653-bc3d-49f6-a43e-79e45b8f8403\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.654980 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-config" (OuterVolumeSpecName: "config") pod "6a047653-bc3d-49f6-a43e-79e45b8f8403" (UID: "6a047653-bc3d-49f6-a43e-79e45b8f8403"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.654963 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-client-ca" (OuterVolumeSpecName: "client-ca") pod "6a047653-bc3d-49f6-a43e-79e45b8f8403" (UID: "6a047653-bc3d-49f6-a43e-79e45b8f8403"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.655353 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.655381 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.662901 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a047653-bc3d-49f6-a43e-79e45b8f8403-kube-api-access-8l4mv" (OuterVolumeSpecName: "kube-api-access-8l4mv") pod "6a047653-bc3d-49f6-a43e-79e45b8f8403" (UID: "6a047653-bc3d-49f6-a43e-79e45b8f8403"). InnerVolumeSpecName "kube-api-access-8l4mv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.664045 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a047653-bc3d-49f6-a43e-79e45b8f8403-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6a047653-bc3d-49f6-a43e-79e45b8f8403" (UID: "6a047653-bc3d-49f6-a43e-79e45b8f8403"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.742004 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.757763 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a047653-bc3d-49f6-a43e-79e45b8f8403-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.757792 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l4mv\" (UniqueName: \"kubernetes.io/projected/6a047653-bc3d-49f6-a43e-79e45b8f8403-kube-api-access-8l4mv\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.821345 4761 generic.go:334] "Generic (PLEG): container finished" podID="6a047653-bc3d-49f6-a43e-79e45b8f8403" containerID="26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27" exitCode=0 Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.821403 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" event={"ID":"6a047653-bc3d-49f6-a43e-79e45b8f8403","Type":"ContainerDied","Data":"26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27"} Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.821439 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.821467 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" event={"ID":"6a047653-bc3d-49f6-a43e-79e45b8f8403","Type":"ContainerDied","Data":"d8bee143cea706c193650a53220b7c5ab5a798cd3c7e7e2f866ba5225ea372f8"} Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.821493 4761 scope.go:117] "RemoveContainer" containerID="26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.823679 4761 generic.go:334] "Generic (PLEG): container finished" podID="ec6e0bc0-596b-4842-8381-4336ae8f54f4" containerID="995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3" exitCode=0 Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.823860 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" event={"ID":"ec6e0bc0-596b-4842-8381-4336ae8f54f4","Type":"ContainerDied","Data":"995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3"} Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.823922 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" event={"ID":"ec6e0bc0-596b-4842-8381-4336ae8f54f4","Type":"ContainerDied","Data":"12ae557f59fd48d5f93c954aae140bd486695f52cdce2207e6fa0df160c5c050"} Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.823872 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.845479 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"] Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.845580 4761 scope.go:117] "RemoveContainer" containerID="26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27" Mar 07 07:55:31 crc kubenswrapper[4761]: E0307 07:55:31.846352 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27\": container with ID starting with 26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27 not found: ID does not exist" containerID="26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.846411 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27"} err="failed to get container status \"26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27\": rpc error: code = NotFound desc = could not find container \"26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27\": container with ID starting with 26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27 not found: ID does not exist" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.846457 4761 scope.go:117] "RemoveContainer" containerID="995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.848913 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"] Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.858665 4761 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-proxy-ca-bundles\") pod \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.858804 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v85lh\" (UniqueName: \"kubernetes.io/projected/ec6e0bc0-596b-4842-8381-4336ae8f54f4-kube-api-access-v85lh\") pod \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.858837 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-client-ca\") pod \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.858862 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-config\") pod \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.858979 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec6e0bc0-596b-4842-8381-4336ae8f54f4-serving-cert\") pod \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.859514 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ec6e0bc0-596b-4842-8381-4336ae8f54f4" (UID: 
"ec6e0bc0-596b-4842-8381-4336ae8f54f4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.860148 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-client-ca" (OuterVolumeSpecName: "client-ca") pod "ec6e0bc0-596b-4842-8381-4336ae8f54f4" (UID: "ec6e0bc0-596b-4842-8381-4336ae8f54f4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.860523 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-config" (OuterVolumeSpecName: "config") pod "ec6e0bc0-596b-4842-8381-4336ae8f54f4" (UID: "ec6e0bc0-596b-4842-8381-4336ae8f54f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.862352 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec6e0bc0-596b-4842-8381-4336ae8f54f4-kube-api-access-v85lh" (OuterVolumeSpecName: "kube-api-access-v85lh") pod "ec6e0bc0-596b-4842-8381-4336ae8f54f4" (UID: "ec6e0bc0-596b-4842-8381-4336ae8f54f4"). InnerVolumeSpecName "kube-api-access-v85lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.863901 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6e0bc0-596b-4842-8381-4336ae8f54f4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ec6e0bc0-596b-4842-8381-4336ae8f54f4" (UID: "ec6e0bc0-596b-4842-8381-4336ae8f54f4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.871090 4761 scope.go:117] "RemoveContainer" containerID="995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3" Mar 07 07:55:31 crc kubenswrapper[4761]: E0307 07:55:31.873008 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3\": container with ID starting with 995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3 not found: ID does not exist" containerID="995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.873052 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3"} err="failed to get container status \"995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3\": rpc error: code = NotFound desc = could not find container \"995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3\": container with ID starting with 995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3 not found: ID does not exist" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.961036 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec6e0bc0-596b-4842-8381-4336ae8f54f4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.961440 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.961646 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v85lh\" (UniqueName: 
\"kubernetes.io/projected/ec6e0bc0-596b-4842-8381-4336ae8f54f4-kube-api-access-v85lh\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.961811 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.961981 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.177596 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-568f6f76cb-d44fh"] Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.184803 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-568f6f76cb-d44fh"] Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.540933 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq"] Mar 07 07:55:32 crc kubenswrapper[4761]: E0307 07:55:32.542224 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6e0bc0-596b-4842-8381-4336ae8f54f4" containerName="controller-manager" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.542254 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6e0bc0-596b-4842-8381-4336ae8f54f4" containerName="controller-manager" Mar 07 07:55:32 crc kubenswrapper[4761]: E0307 07:55:32.542286 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a047653-bc3d-49f6-a43e-79e45b8f8403" containerName="route-controller-manager" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.542296 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a047653-bc3d-49f6-a43e-79e45b8f8403" 
containerName="route-controller-manager" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.542449 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6e0bc0-596b-4842-8381-4336ae8f54f4" containerName="controller-manager" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.542467 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a047653-bc3d-49f6-a43e-79e45b8f8403" containerName="route-controller-manager" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.543040 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.549382 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k"] Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.550257 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.550432 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.551177 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.551858 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.551902 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.554816 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.554890 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.554822 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.555599 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.555900 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.555945 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.555946 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.556223 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 07 07:55:32 crc 
kubenswrapper[4761]: I0307 07:55:32.559811 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq"] Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.568657 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.574239 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k"] Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.672041 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3667d397-4aef-4ee2-8571-8ee7c93c719b-serving-cert\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.672135 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxmt2\" (UniqueName: \"kubernetes.io/projected/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-kube-api-access-gxmt2\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.672181 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3667d397-4aef-4ee2-8571-8ee7c93c719b-config\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.672206 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-config\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.672239 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-proxy-ca-bundles\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.672318 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-client-ca\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.672342 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3667d397-4aef-4ee2-8571-8ee7c93c719b-client-ca\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.672372 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jl2z\" (UniqueName: \"kubernetes.io/projected/3667d397-4aef-4ee2-8571-8ee7c93c719b-kube-api-access-8jl2z\") pod 
\"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.672395 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-serving-cert\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.774271 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-config\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.774352 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-proxy-ca-bundles\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.774439 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-client-ca\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.774484 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3667d397-4aef-4ee2-8571-8ee7c93c719b-client-ca\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.774529 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jl2z\" (UniqueName: \"kubernetes.io/projected/3667d397-4aef-4ee2-8571-8ee7c93c719b-kube-api-access-8jl2z\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.774570 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-serving-cert\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.774659 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3667d397-4aef-4ee2-8571-8ee7c93c719b-serving-cert\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.774811 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxmt2\" (UniqueName: \"kubernetes.io/projected/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-kube-api-access-gxmt2\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " 
pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.774879 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3667d397-4aef-4ee2-8571-8ee7c93c719b-config\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.776082 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-config\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.776139 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-proxy-ca-bundles\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.776338 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-client-ca\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.777004 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3667d397-4aef-4ee2-8571-8ee7c93c719b-client-ca\") pod 
\"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.778548 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3667d397-4aef-4ee2-8571-8ee7c93c719b-config\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.782934 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-serving-cert\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.793985 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3667d397-4aef-4ee2-8571-8ee7c93c719b-serving-cert\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.807652 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jl2z\" (UniqueName: \"kubernetes.io/projected/3667d397-4aef-4ee2-8571-8ee7c93c719b-kube-api-access-8jl2z\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.808303 4761 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-gxmt2\" (UniqueName: \"kubernetes.io/projected/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-kube-api-access-gxmt2\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.882667 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.885099 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.173853 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k"] Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.342015 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-z5qbh"] Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.344417 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.369505 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-z5qbh"] Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.373457 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq"] Mar 07 07:55:33 crc kubenswrapper[4761]: W0307 07:55:33.375330 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3667d397_4aef_4ee2_8571_8ee7c93c719b.slice/crio-0d4feef55d3f7879186e72c7897385b236d6f6fdf89c2b8537679bf8ce254352 WatchSource:0}: Error finding container 0d4feef55d3f7879186e72c7897385b236d6f6fdf89c2b8537679bf8ce254352: Status 404 returned error can't find the container with id 0d4feef55d3f7879186e72c7897385b236d6f6fdf89c2b8537679bf8ce254352 Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.384592 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/506b9f2c-6502-4938-8e1d-243b8e02cc42-registry-tls\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.384650 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/506b9f2c-6502-4938-8e1d-243b8e02cc42-bound-sa-token\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.384699 4761 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.384817 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/506b9f2c-6502-4938-8e1d-243b8e02cc42-trusted-ca\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.384864 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/506b9f2c-6502-4938-8e1d-243b8e02cc42-installation-pull-secrets\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.384898 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/506b9f2c-6502-4938-8e1d-243b8e02cc42-registry-certificates\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.385009 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4qsq\" (UniqueName: \"kubernetes.io/projected/506b9f2c-6502-4938-8e1d-243b8e02cc42-kube-api-access-n4qsq\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: 
\"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.385071 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/506b9f2c-6502-4938-8e1d-243b8e02cc42-ca-trust-extracted\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.427446 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.486358 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/506b9f2c-6502-4938-8e1d-243b8e02cc42-bound-sa-token\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.486772 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/506b9f2c-6502-4938-8e1d-243b8e02cc42-registry-tls\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.486834 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/506b9f2c-6502-4938-8e1d-243b8e02cc42-trusted-ca\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.486859 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/506b9f2c-6502-4938-8e1d-243b8e02cc42-installation-pull-secrets\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.486889 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/506b9f2c-6502-4938-8e1d-243b8e02cc42-registry-certificates\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.486945 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4qsq\" (UniqueName: \"kubernetes.io/projected/506b9f2c-6502-4938-8e1d-243b8e02cc42-kube-api-access-n4qsq\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.486977 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/506b9f2c-6502-4938-8e1d-243b8e02cc42-ca-trust-extracted\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.487513 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/506b9f2c-6502-4938-8e1d-243b8e02cc42-ca-trust-extracted\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.488738 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/506b9f2c-6502-4938-8e1d-243b8e02cc42-trusted-ca\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.490384 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/506b9f2c-6502-4938-8e1d-243b8e02cc42-registry-certificates\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.492925 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/506b9f2c-6502-4938-8e1d-243b8e02cc42-registry-tls\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.500735 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/506b9f2c-6502-4938-8e1d-243b8e02cc42-installation-pull-secrets\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc 
kubenswrapper[4761]: I0307 07:55:33.506010 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4qsq\" (UniqueName: \"kubernetes.io/projected/506b9f2c-6502-4938-8e1d-243b8e02cc42-kube-api-access-n4qsq\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.506475 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/506b9f2c-6502-4938-8e1d-243b8e02cc42-bound-sa-token\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.712088 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a047653-bc3d-49f6-a43e-79e45b8f8403" path="/var/lib/kubelet/pods/6a047653-bc3d-49f6-a43e-79e45b8f8403/volumes" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.712698 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec6e0bc0-596b-4842-8381-4336ae8f54f4" path="/var/lib/kubelet/pods/ec6e0bc0-596b-4842-8381-4336ae8f54f4/volumes" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.733531 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.846864 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" event={"ID":"3667d397-4aef-4ee2-8571-8ee7c93c719b","Type":"ContainerStarted","Data":"a384d8e72abca7f94ea5bd0fc5a2b830afa37dc0ef04bd043411a226b34f720c"} Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.847223 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" event={"ID":"3667d397-4aef-4ee2-8571-8ee7c93c719b","Type":"ContainerStarted","Data":"0d4feef55d3f7879186e72c7897385b236d6f6fdf89c2b8537679bf8ce254352"} Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.847770 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.852163 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" event={"ID":"1abc2486-5f9c-4f0a-af63-365bcc4c1c61","Type":"ContainerStarted","Data":"d06988cae3b64334503e789d0e91e85389860e0eeeb5b563991a7021feb36127"} Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.852199 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" event={"ID":"1abc2486-5f9c-4f0a-af63-365bcc4c1c61","Type":"ContainerStarted","Data":"258e15c48260fa492b42187560256576f9635cd2a6bf8612a4da0160a3c1c365"} Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.852875 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.877999 4761 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.878253 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podStartSLOduration=2.8782311590000003 podStartE2EDuration="2.878231159s" podCreationTimestamp="2026-03-07 07:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:55:33.875155454 +0000 UTC m=+390.784321929" watchObservedRunningTime="2026-03-07 07:55:33.878231159 +0000 UTC m=+390.787397634" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.897595 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podStartSLOduration=2.897578602 podStartE2EDuration="2.897578602s" podCreationTimestamp="2026-03-07 07:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:55:33.891184776 +0000 UTC m=+390.800351251" watchObservedRunningTime="2026-03-07 07:55:33.897578602 +0000 UTC m=+390.806745067" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.989017 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:34 crc kubenswrapper[4761]: I0307 07:55:34.143376 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-z5qbh"] Mar 07 07:55:34 crc kubenswrapper[4761]: W0307 07:55:34.154514 4761 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod506b9f2c_6502_4938_8e1d_243b8e02cc42.slice/crio-538380e390799be14f8705f6a051274f4663d4b5eb7faeb67c4c44ab466193dd WatchSource:0}: Error finding container 538380e390799be14f8705f6a051274f4663d4b5eb7faeb67c4c44ab466193dd: Status 404 returned error can't find the container with id 538380e390799be14f8705f6a051274f4663d4b5eb7faeb67c4c44ab466193dd Mar 07 07:55:34 crc kubenswrapper[4761]: I0307 07:55:34.859810 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" event={"ID":"506b9f2c-6502-4938-8e1d-243b8e02cc42","Type":"ContainerStarted","Data":"d6525a793c4c8563a21b148eab59be40aaa802ab5c9a25162bfda621fce6693e"} Mar 07 07:55:34 crc kubenswrapper[4761]: I0307 07:55:34.860112 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" event={"ID":"506b9f2c-6502-4938-8e1d-243b8e02cc42","Type":"ContainerStarted","Data":"538380e390799be14f8705f6a051274f4663d4b5eb7faeb67c4c44ab466193dd"} Mar 07 07:55:34 crc kubenswrapper[4761]: I0307 07:55:34.882348 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" podStartSLOduration=1.882323466 podStartE2EDuration="1.882323466s" podCreationTimestamp="2026-03-07 07:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:55:34.879646513 +0000 UTC m=+391.788812978" watchObservedRunningTime="2026-03-07 07:55:34.882323466 +0000 UTC m=+391.791489971" Mar 07 07:55:35 crc kubenswrapper[4761]: I0307 07:55:35.867156 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.403111 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-ztv97"] Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.405100 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ztv97" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerName="registry-server" containerID="cri-o://df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5" gracePeriod=30 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.416581 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-phm95"] Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.418678 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-phm95" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerName="registry-server" containerID="cri-o://64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd" gracePeriod=30 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.435484 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k4zfw"] Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.438412 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator" containerID="cri-o://01fa76c6bfa57c63d1718130ceb65ce0ca778bd9ca727c6c0c1fc1223a507d8b" gracePeriod=30 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.452535 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zgvpf"] Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.453477 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.455557 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xc9s"] Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.458845 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2xc9s" podUID="e614b274-38db-4951-8f55-a09c49011cb5" containerName="registry-server" containerID="cri-o://fe015ee6272e042dae30cd2808b3678be315fe850f6c47c0467d20dbada1e9fb" gracePeriod=30 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.475053 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zbq9k"] Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.475563 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zbq9k" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerName="registry-server" containerID="cri-o://deaab19835705647b9d6b2f0a10fb31b5a897e6c428ca064bd6819f7542264ae" gracePeriod=30 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.476523 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zgvpf"] Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.640331 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b3bce52-2720-4999-bf2f-f6808cd3a5fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zgvpf\" (UID: \"2b3bce52-2720-4999-bf2f-f6808cd3a5fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.640370 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7dhh\" 
(UniqueName: \"kubernetes.io/projected/2b3bce52-2720-4999-bf2f-f6808cd3a5fe-kube-api-access-b7dhh\") pod \"marketplace-operator-79b997595-zgvpf\" (UID: \"2b3bce52-2720-4999-bf2f-f6808cd3a5fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.640406 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b3bce52-2720-4999-bf2f-f6808cd3a5fe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zgvpf\" (UID: \"2b3bce52-2720-4999-bf2f-f6808cd3a5fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.743392 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b3bce52-2720-4999-bf2f-f6808cd3a5fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zgvpf\" (UID: \"2b3bce52-2720-4999-bf2f-f6808cd3a5fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.743462 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7dhh\" (UniqueName: \"kubernetes.io/projected/2b3bce52-2720-4999-bf2f-f6808cd3a5fe-kube-api-access-b7dhh\") pod \"marketplace-operator-79b997595-zgvpf\" (UID: \"2b3bce52-2720-4999-bf2f-f6808cd3a5fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.744157 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b3bce52-2720-4999-bf2f-f6808cd3a5fe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zgvpf\" (UID: \"2b3bce52-2720-4999-bf2f-f6808cd3a5fe\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.745046 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b3bce52-2720-4999-bf2f-f6808cd3a5fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zgvpf\" (UID: \"2b3bce52-2720-4999-bf2f-f6808cd3a5fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.754249 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b3bce52-2720-4999-bf2f-f6808cd3a5fe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zgvpf\" (UID: \"2b3bce52-2720-4999-bf2f-f6808cd3a5fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.765382 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7dhh\" (UniqueName: \"kubernetes.io/projected/2b3bce52-2720-4999-bf2f-f6808cd3a5fe-kube-api-access-b7dhh\") pod \"marketplace-operator-79b997595-zgvpf\" (UID: \"2b3bce52-2720-4999-bf2f-f6808cd3a5fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.801149 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.918793 4761 generic.go:334] "Generic (PLEG): container finished" podID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerID="64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd" exitCode=0 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.918862 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phm95" event={"ID":"4601b717-e620-42a5-9f21-3b6fea1e71ff","Type":"ContainerDied","Data":"64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd"} Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.920963 4761 generic.go:334] "Generic (PLEG): container finished" podID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerID="deaab19835705647b9d6b2f0a10fb31b5a897e6c428ca064bd6819f7542264ae" exitCode=0 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.921019 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbq9k" event={"ID":"475b44c2-ce39-4d2c-b475-8a88c37a4d22","Type":"ContainerDied","Data":"deaab19835705647b9d6b2f0a10fb31b5a897e6c428ca064bd6819f7542264ae"} Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.922499 4761 generic.go:334] "Generic (PLEG): container finished" podID="69f8f788-a780-4cf1-9ef7-397428d61593" containerID="01fa76c6bfa57c63d1718130ceb65ce0ca778bd9ca727c6c0c1fc1223a507d8b" exitCode=0 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.922548 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" event={"ID":"69f8f788-a780-4cf1-9ef7-397428d61593","Type":"ContainerDied","Data":"01fa76c6bfa57c63d1718130ceb65ce0ca778bd9ca727c6c0c1fc1223a507d8b"} Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.922574 4761 scope.go:117] "RemoveContainer" 
containerID="2f2df3f61605050ff823689a3ab84881edb02d6979ac541c6c9979f7a1145713" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.929676 4761 generic.go:334] "Generic (PLEG): container finished" podID="e614b274-38db-4951-8f55-a09c49011cb5" containerID="fe015ee6272e042dae30cd2808b3678be315fe850f6c47c0467d20dbada1e9fb" exitCode=0 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.929843 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xc9s" event={"ID":"e614b274-38db-4951-8f55-a09c49011cb5","Type":"ContainerDied","Data":"fe015ee6272e042dae30cd2808b3678be315fe850f6c47c0467d20dbada1e9fb"} Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.933714 4761 generic.go:334] "Generic (PLEG): container finished" podID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerID="df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5" exitCode=0 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.933772 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztv97" event={"ID":"af0bdacc-ab60-43aa-adf2-86894b0896e3","Type":"ContainerDied","Data":"df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5"} Mar 07 07:55:42 crc kubenswrapper[4761]: E0307 07:55:42.971811 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd is running failed: container process not found" containerID="64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd" cmd=["grpc_health_probe","-addr=:50051"] Mar 07 07:55:42 crc kubenswrapper[4761]: E0307 07:55:42.972150 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd is running failed: container 
process not found" containerID="64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd" cmd=["grpc_health_probe","-addr=:50051"] Mar 07 07:55:42 crc kubenswrapper[4761]: E0307 07:55:42.972462 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd is running failed: container process not found" containerID="64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd" cmd=["grpc_health_probe","-addr=:50051"] Mar 07 07:55:42 crc kubenswrapper[4761]: E0307 07:55:42.972546 4761 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-phm95" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerName="registry-server" Mar 07 07:55:43 crc kubenswrapper[4761]: E0307 07:55:43.179581 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5 is running failed: container process not found" containerID="df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5" cmd=["grpc_health_probe","-addr=:50051"] Mar 07 07:55:43 crc kubenswrapper[4761]: E0307 07:55:43.180062 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5 is running failed: container process not found" containerID="df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5" cmd=["grpc_health_probe","-addr=:50051"] Mar 07 07:55:43 crc kubenswrapper[4761]: E0307 
07:55:43.180432 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5 is running failed: container process not found" containerID="df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5" cmd=["grpc_health_probe","-addr=:50051"] Mar 07 07:55:43 crc kubenswrapper[4761]: E0307 07:55:43.180463 4761 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-ztv97" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerName="registry-server" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.234774 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zgvpf"] Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.370589 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.489026 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.555530 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-utilities\") pod \"af0bdacc-ab60-43aa-adf2-86894b0896e3\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.555586 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-catalog-content\") pod \"af0bdacc-ab60-43aa-adf2-86894b0896e3\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.555674 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjlrz\" (UniqueName: \"kubernetes.io/projected/af0bdacc-ab60-43aa-adf2-86894b0896e3-kube-api-access-bjlrz\") pod \"af0bdacc-ab60-43aa-adf2-86894b0896e3\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.556862 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-utilities" (OuterVolumeSpecName: "utilities") pod "af0bdacc-ab60-43aa-adf2-86894b0896e3" (UID: "af0bdacc-ab60-43aa-adf2-86894b0896e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.564536 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af0bdacc-ab60-43aa-adf2-86894b0896e3-kube-api-access-bjlrz" (OuterVolumeSpecName: "kube-api-access-bjlrz") pod "af0bdacc-ab60-43aa-adf2-86894b0896e3" (UID: "af0bdacc-ab60-43aa-adf2-86894b0896e3"). InnerVolumeSpecName "kube-api-access-bjlrz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.615799 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af0bdacc-ab60-43aa-adf2-86894b0896e3" (UID: "af0bdacc-ab60-43aa-adf2-86894b0896e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.656754 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-trusted-ca\") pod \"69f8f788-a780-4cf1-9ef7-397428d61593\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.657075 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bws6p\" (UniqueName: \"kubernetes.io/projected/69f8f788-a780-4cf1-9ef7-397428d61593-kube-api-access-bws6p\") pod \"69f8f788-a780-4cf1-9ef7-397428d61593\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.657105 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-operator-metrics\") pod \"69f8f788-a780-4cf1-9ef7-397428d61593\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.657342 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjlrz\" (UniqueName: \"kubernetes.io/projected/af0bdacc-ab60-43aa-adf2-86894b0896e3-kube-api-access-bjlrz\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.657360 4761 reconciler_common.go:293] 
"Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.657371 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.657606 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "69f8f788-a780-4cf1-9ef7-397428d61593" (UID: "69f8f788-a780-4cf1-9ef7-397428d61593"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.662980 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "69f8f788-a780-4cf1-9ef7-397428d61593" (UID: "69f8f788-a780-4cf1-9ef7-397428d61593"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.667565 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f8f788-a780-4cf1-9ef7-397428d61593-kube-api-access-bws6p" (OuterVolumeSpecName: "kube-api-access-bws6p") pod "69f8f788-a780-4cf1-9ef7-397428d61593" (UID: "69f8f788-a780-4cf1-9ef7-397428d61593"). InnerVolumeSpecName "kube-api-access-bws6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.718029 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-phm95" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.729346 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.767252 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bws6p\" (UniqueName: \"kubernetes.io/projected/69f8f788-a780-4cf1-9ef7-397428d61593-kube-api-access-bws6p\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.767308 4761 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.767326 4761 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.779981 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.867887 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-utilities\") pod \"e614b274-38db-4951-8f55-a09c49011cb5\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.867963 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-catalog-content\") pod \"4601b717-e620-42a5-9f21-3b6fea1e71ff\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.867988 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-catalog-content\") pod \"e614b274-38db-4951-8f55-a09c49011cb5\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.868018 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr6qf\" (UniqueName: \"kubernetes.io/projected/e614b274-38db-4951-8f55-a09c49011cb5-kube-api-access-mr6qf\") pod \"e614b274-38db-4951-8f55-a09c49011cb5\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.868067 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxh7b\" (UniqueName: \"kubernetes.io/projected/4601b717-e620-42a5-9f21-3b6fea1e71ff-kube-api-access-pxh7b\") pod \"4601b717-e620-42a5-9f21-3b6fea1e71ff\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.868086 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-utilities\") pod \"4601b717-e620-42a5-9f21-3b6fea1e71ff\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.868513 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-utilities" (OuterVolumeSpecName: "utilities") pod "e614b274-38db-4951-8f55-a09c49011cb5" (UID: "e614b274-38db-4951-8f55-a09c49011cb5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.869001 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-utilities" (OuterVolumeSpecName: "utilities") pod "4601b717-e620-42a5-9f21-3b6fea1e71ff" (UID: "4601b717-e620-42a5-9f21-3b6fea1e71ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.870596 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e614b274-38db-4951-8f55-a09c49011cb5-kube-api-access-mr6qf" (OuterVolumeSpecName: "kube-api-access-mr6qf") pod "e614b274-38db-4951-8f55-a09c49011cb5" (UID: "e614b274-38db-4951-8f55-a09c49011cb5"). InnerVolumeSpecName "kube-api-access-mr6qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.870682 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4601b717-e620-42a5-9f21-3b6fea1e71ff-kube-api-access-pxh7b" (OuterVolumeSpecName: "kube-api-access-pxh7b") pod "4601b717-e620-42a5-9f21-3b6fea1e71ff" (UID: "4601b717-e620-42a5-9f21-3b6fea1e71ff"). InnerVolumeSpecName "kube-api-access-pxh7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.893316 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e614b274-38db-4951-8f55-a09c49011cb5" (UID: "e614b274-38db-4951-8f55-a09c49011cb5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.920419 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4601b717-e620-42a5-9f21-3b6fea1e71ff" (UID: "4601b717-e620-42a5-9f21-3b6fea1e71ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.940645 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbq9k" event={"ID":"475b44c2-ce39-4d2c-b475-8a88c37a4d22","Type":"ContainerDied","Data":"7cd404336db3278582f3f84c6dc0758504d6802e4eca15bb6c8c6727f6809d2e"} Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.940658 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.940687 4761 scope.go:117] "RemoveContainer" containerID="deaab19835705647b9d6b2f0a10fb31b5a897e6c428ca064bd6819f7542264ae" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.943997 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.944006 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" event={"ID":"69f8f788-a780-4cf1-9ef7-397428d61593","Type":"ContainerDied","Data":"c150a349c466aab661ebc693d49c15af1d9dfe7cb7614720742bde80d20f9114"} Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.948825 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xc9s" event={"ID":"e614b274-38db-4951-8f55-a09c49011cb5","Type":"ContainerDied","Data":"0f8a13c45f1b2417142f965fdcdde66f49582188f29393329d8613a807a1c1e7"} Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.948845 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.954812 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztv97" event={"ID":"af0bdacc-ab60-43aa-adf2-86894b0896e3","Type":"ContainerDied","Data":"380bf8edebb71ccc54dc5753c5a6aefa35966a99189fc44f9ac78aa54408029b"} Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.954889 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.956657 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" event={"ID":"2b3bce52-2720-4999-bf2f-f6808cd3a5fe","Type":"ContainerStarted","Data":"8f764198f67483ecf0a27d096e31c0cd1d72c91f45d7817e422650612af5f72b"} Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.961706 4761 scope.go:117] "RemoveContainer" containerID="32ae07b5efa72bd99f8ff659836fc71899a382e15730308f60ed8dcbc0efef86" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.963411 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phm95" event={"ID":"4601b717-e620-42a5-9f21-3b6fea1e71ff","Type":"ContainerDied","Data":"87669bb4bd1b22af2f1cf3323992c4d2932aba3177404dd39bc77b7522579d9f"} Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.963531 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-phm95"
Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.965888 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k4zfw"]
Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.971086 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-catalog-content\") pod \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") "
Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.971221 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9psrq\" (UniqueName: \"kubernetes.io/projected/475b44c2-ce39-4d2c-b475-8a88c37a4d22-kube-api-access-9psrq\") pod \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") "
Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.971318 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-utilities\") pod \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") "
Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.972558 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.972577 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.972594 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.972603 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr6qf\" (UniqueName: \"kubernetes.io/projected/e614b274-38db-4951-8f55-a09c49011cb5-kube-api-access-mr6qf\") on node \"crc\" DevicePath \"\""
Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.972612 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxh7b\" (UniqueName: \"kubernetes.io/projected/4601b717-e620-42a5-9f21-3b6fea1e71ff-kube-api-access-pxh7b\") on node \"crc\" DevicePath \"\""
Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.972621 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.973479 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-utilities" (OuterVolumeSpecName: "utilities") pod "475b44c2-ce39-4d2c-b475-8a88c37a4d22" (UID: "475b44c2-ce39-4d2c-b475-8a88c37a4d22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.981696 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k4zfw"]
Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.982278 4761 scope.go:117] "RemoveContainer" containerID="36b36475a04f4bb5788ef4f132601b8eb14578495098f24a65c31ca99151024f"
Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.996136 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/475b44c2-ce39-4d2c-b475-8a88c37a4d22-kube-api-access-9psrq" (OuterVolumeSpecName: "kube-api-access-9psrq") pod "475b44c2-ce39-4d2c-b475-8a88c37a4d22" (UID: "475b44c2-ce39-4d2c-b475-8a88c37a4d22"). InnerVolumeSpecName "kube-api-access-9psrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.002634 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ztv97"]
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.006480 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ztv97"]
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.010429 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-phm95"]
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.014760 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-phm95"]
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.022595 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xc9s"]
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.024150 4761 scope.go:117] "RemoveContainer" containerID="01fa76c6bfa57c63d1718130ceb65ce0ca778bd9ca727c6c0c1fc1223a507d8b"
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.027290 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xc9s"]
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.036091 4761 scope.go:117] "RemoveContainer" containerID="fe015ee6272e042dae30cd2808b3678be315fe850f6c47c0467d20dbada1e9fb"
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.048696 4761 scope.go:117] "RemoveContainer" containerID="8c89fe962a9624c9fd29c25d7f382e97b0df9a7bc0c84cdb1adb1eda58732e73"
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.073631 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9psrq\" (UniqueName: \"kubernetes.io/projected/475b44c2-ce39-4d2c-b475-8a88c37a4d22-kube-api-access-9psrq\") on node \"crc\" DevicePath \"\""
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.073658 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.074251 4761 scope.go:117] "RemoveContainer" containerID="419e20e6925afae9dc0ba45b444441d41aa2f7cac8e1cd54262a4617cf13bfac"
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.088621 4761 scope.go:117] "RemoveContainer" containerID="df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5"
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.101274 4761 scope.go:117] "RemoveContainer" containerID="3955eaf8cf0981f9e84cee368080bc45aa2a3fb2c6bacccd0b26b5e6cd9cd62b"
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.114877 4761 scope.go:117] "RemoveContainer" containerID="1c3274c0a25c242c822a9a96600580a87d121dad2e64c3584b09930e252e967b"
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.143296 4761 scope.go:117] "RemoveContainer" containerID="64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd"
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.162486 4761 scope.go:117] "RemoveContainer" containerID="d06af30b8ef444210e60a8b34a70a405dd866ea5ad35ffb9eb965e728d7b06de"
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.167625 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "475b44c2-ce39-4d2c-b475-8a88c37a4d22" (UID: "475b44c2-ce39-4d2c-b475-8a88c37a4d22"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.178076 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.180890 4761 scope.go:117] "RemoveContainer" containerID="79cf6c37dd3da83bfe64b281cbf9b5693aab6cce5559515b7d72605580520781"
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.268559 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zbq9k"]
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.271279 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zbq9k"]
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.970000 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" event={"ID":"2b3bce52-2720-4999-bf2f-f6808cd3a5fe","Type":"ContainerStarted","Data":"b6193bd889ffe38e7587c3bf176f03324132a8fe93273085b2960f8bc71d2e62"}
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.972067 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf"
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.975027 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf"
Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.988950 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podStartSLOduration=2.9889309649999998 podStartE2EDuration="2.988930965s" podCreationTimestamp="2026-03-07 07:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:55:44.987467554 +0000 UTC m=+401.896634069" watchObservedRunningTime="2026-03-07 07:55:44.988930965 +0000 UTC m=+401.898097440"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.275985 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b5t8f"]
Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276225 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerName="extract-utilities"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276238 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerName="extract-utilities"
Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276250 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276257 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator"
Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276266 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerName="extract-utilities"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276273 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerName="extract-utilities"
Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276279 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e614b274-38db-4951-8f55-a09c49011cb5" containerName="extract-utilities"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276285 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e614b274-38db-4951-8f55-a09c49011cb5" containerName="extract-utilities"
Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276294 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e614b274-38db-4951-8f55-a09c49011cb5" containerName="registry-server"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276300 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e614b274-38db-4951-8f55-a09c49011cb5" containerName="registry-server"
Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276310 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerName="registry-server"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276316 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerName="registry-server"
Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276339 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerName="registry-server"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276345 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerName="registry-server"
Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276354 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerName="extract-content"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276360 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerName="extract-content"
Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276369 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerName="extract-content"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276375 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerName="extract-content"
Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276381 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerName="extract-content"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276386 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerName="extract-content"
Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276393 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerName="registry-server"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276399 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerName="registry-server"
Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276408 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e614b274-38db-4951-8f55-a09c49011cb5" containerName="extract-content"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276414 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e614b274-38db-4951-8f55-a09c49011cb5" containerName="extract-content"
Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276421 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerName="extract-utilities"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276427 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerName="extract-utilities"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276509 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276519 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerName="registry-server"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276529 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="e614b274-38db-4951-8f55-a09c49011cb5" containerName="registry-server"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276536 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerName="registry-server"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276546 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerName="registry-server"
Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276637 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276644 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276755 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.277304 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5t8f"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.279319 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.286960 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5t8f"]
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.394546 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26b26086-7428-4218-a5c0-64eb4a9d581f-utilities\") pod \"redhat-marketplace-b5t8f\" (UID: \"26b26086-7428-4218-a5c0-64eb4a9d581f\") " pod="openshift-marketplace/redhat-marketplace-b5t8f"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.394600 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26b26086-7428-4218-a5c0-64eb4a9d581f-catalog-content\") pod \"redhat-marketplace-b5t8f\" (UID: \"26b26086-7428-4218-a5c0-64eb4a9d581f\") " pod="openshift-marketplace/redhat-marketplace-b5t8f"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.394737 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsrgr\" (UniqueName: \"kubernetes.io/projected/26b26086-7428-4218-a5c0-64eb4a9d581f-kube-api-access-hsrgr\") pod \"redhat-marketplace-b5t8f\" (UID: \"26b26086-7428-4218-a5c0-64eb4a9d581f\") " pod="openshift-marketplace/redhat-marketplace-b5t8f"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.496260 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26b26086-7428-4218-a5c0-64eb4a9d581f-utilities\") pod \"redhat-marketplace-b5t8f\" (UID: \"26b26086-7428-4218-a5c0-64eb4a9d581f\") " pod="openshift-marketplace/redhat-marketplace-b5t8f"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.496315 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26b26086-7428-4218-a5c0-64eb4a9d581f-catalog-content\") pod \"redhat-marketplace-b5t8f\" (UID: \"26b26086-7428-4218-a5c0-64eb4a9d581f\") " pod="openshift-marketplace/redhat-marketplace-b5t8f"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.496372 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsrgr\" (UniqueName: \"kubernetes.io/projected/26b26086-7428-4218-a5c0-64eb4a9d581f-kube-api-access-hsrgr\") pod \"redhat-marketplace-b5t8f\" (UID: \"26b26086-7428-4218-a5c0-64eb4a9d581f\") " pod="openshift-marketplace/redhat-marketplace-b5t8f"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.496911 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26b26086-7428-4218-a5c0-64eb4a9d581f-utilities\") pod \"redhat-marketplace-b5t8f\" (UID: \"26b26086-7428-4218-a5c0-64eb4a9d581f\") " pod="openshift-marketplace/redhat-marketplace-b5t8f"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.496993 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26b26086-7428-4218-a5c0-64eb4a9d581f-catalog-content\") pod \"redhat-marketplace-b5t8f\" (UID: \"26b26086-7428-4218-a5c0-64eb4a9d581f\") " pod="openshift-marketplace/redhat-marketplace-b5t8f"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.513285 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsrgr\" (UniqueName: \"kubernetes.io/projected/26b26086-7428-4218-a5c0-64eb4a9d581f-kube-api-access-hsrgr\") pod \"redhat-marketplace-b5t8f\" (UID: \"26b26086-7428-4218-a5c0-64eb4a9d581f\") " pod="openshift-marketplace/redhat-marketplace-b5t8f"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.594922 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5t8f"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.715080 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" path="/var/lib/kubelet/pods/4601b717-e620-42a5-9f21-3b6fea1e71ff/volumes"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.715828 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" path="/var/lib/kubelet/pods/475b44c2-ce39-4d2c-b475-8a88c37a4d22/volumes"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.716393 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" path="/var/lib/kubelet/pods/69f8f788-a780-4cf1-9ef7-397428d61593/volumes"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.717296 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" path="/var/lib/kubelet/pods/af0bdacc-ab60-43aa-adf2-86894b0896e3/volumes"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.718018 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e614b274-38db-4951-8f55-a09c49011cb5" path="/var/lib/kubelet/pods/e614b274-38db-4951-8f55-a09c49011cb5/volumes"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.879639 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5p7lw"]
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.881211 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5p7lw"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.884503 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.894005 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5p7lw"]
Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.975763 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5t8f"]
Mar 07 07:55:45 crc kubenswrapper[4761]: W0307 07:55:45.990440 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26b26086_7428_4218_a5c0_64eb4a9d581f.slice/crio-9f059ea6e319a34cd4b6d8104882c67d4b5dda1ed3e63c48bae1845cc3e59368 WatchSource:0}: Error finding container 9f059ea6e319a34cd4b6d8104882c67d4b5dda1ed3e63c48bae1845cc3e59368: Status 404 returned error can't find the container with id 9f059ea6e319a34cd4b6d8104882c67d4b5dda1ed3e63c48bae1845cc3e59368
Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.001454 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc70d269-9a38-4cf3-a494-956420600965-catalog-content\") pod \"redhat-operators-5p7lw\" (UID: \"dc70d269-9a38-4cf3-a494-956420600965\") " pod="openshift-marketplace/redhat-operators-5p7lw"
Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.001528 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc70d269-9a38-4cf3-a494-956420600965-utilities\") pod \"redhat-operators-5p7lw\" (UID: \"dc70d269-9a38-4cf3-a494-956420600965\") " pod="openshift-marketplace/redhat-operators-5p7lw"
Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.001588 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9v5x\" (UniqueName: \"kubernetes.io/projected/dc70d269-9a38-4cf3-a494-956420600965-kube-api-access-k9v5x\") pod \"redhat-operators-5p7lw\" (UID: \"dc70d269-9a38-4cf3-a494-956420600965\") " pod="openshift-marketplace/redhat-operators-5p7lw"
Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.103191 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9v5x\" (UniqueName: \"kubernetes.io/projected/dc70d269-9a38-4cf3-a494-956420600965-kube-api-access-k9v5x\") pod \"redhat-operators-5p7lw\" (UID: \"dc70d269-9a38-4cf3-a494-956420600965\") " pod="openshift-marketplace/redhat-operators-5p7lw"
Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.103253 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc70d269-9a38-4cf3-a494-956420600965-catalog-content\") pod \"redhat-operators-5p7lw\" (UID: \"dc70d269-9a38-4cf3-a494-956420600965\") " pod="openshift-marketplace/redhat-operators-5p7lw"
Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.103286 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc70d269-9a38-4cf3-a494-956420600965-utilities\") pod \"redhat-operators-5p7lw\" (UID: \"dc70d269-9a38-4cf3-a494-956420600965\") " pod="openshift-marketplace/redhat-operators-5p7lw"
Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.103982 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc70d269-9a38-4cf3-a494-956420600965-catalog-content\") pod \"redhat-operators-5p7lw\" (UID: \"dc70d269-9a38-4cf3-a494-956420600965\") " pod="openshift-marketplace/redhat-operators-5p7lw"
Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.104270 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc70d269-9a38-4cf3-a494-956420600965-utilities\") pod \"redhat-operators-5p7lw\" (UID: \"dc70d269-9a38-4cf3-a494-956420600965\") " pod="openshift-marketplace/redhat-operators-5p7lw"
Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.126203 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9v5x\" (UniqueName: \"kubernetes.io/projected/dc70d269-9a38-4cf3-a494-956420600965-kube-api-access-k9v5x\") pod \"redhat-operators-5p7lw\" (UID: \"dc70d269-9a38-4cf3-a494-956420600965\") " pod="openshift-marketplace/redhat-operators-5p7lw"
Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.208079 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5p7lw"
Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.635151 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5p7lw"]
Mar 07 07:55:46 crc kubenswrapper[4761]: W0307 07:55:46.643542 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc70d269_9a38_4cf3_a494_956420600965.slice/crio-68283d0512c160b0e30c61731a6ad16d82f1f310d4c685890fdc45a67f883186 WatchSource:0}: Error finding container 68283d0512c160b0e30c61731a6ad16d82f1f310d4c685890fdc45a67f883186: Status 404 returned error can't find the container with id 68283d0512c160b0e30c61731a6ad16d82f1f310d4c685890fdc45a67f883186
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.002356 4761 generic.go:334] "Generic (PLEG): container finished" podID="26b26086-7428-4218-a5c0-64eb4a9d581f" containerID="c976579c83738f004eb56c1c0c608ce2d6e44d78a0632c195c7eba05d125770a" exitCode=0
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.002451 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5t8f" event={"ID":"26b26086-7428-4218-a5c0-64eb4a9d581f","Type":"ContainerDied","Data":"c976579c83738f004eb56c1c0c608ce2d6e44d78a0632c195c7eba05d125770a"}
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.002490 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5t8f" event={"ID":"26b26086-7428-4218-a5c0-64eb4a9d581f","Type":"ContainerStarted","Data":"9f059ea6e319a34cd4b6d8104882c67d4b5dda1ed3e63c48bae1845cc3e59368"}
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.005797 4761 generic.go:334] "Generic (PLEG): container finished" podID="dc70d269-9a38-4cf3-a494-956420600965" containerID="9ab2456c2b668871f30ec1907cc7bebd12da2dfa7e5df9ed790faccf79723140" exitCode=0
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.006439 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p7lw" event={"ID":"dc70d269-9a38-4cf3-a494-956420600965","Type":"ContainerDied","Data":"9ab2456c2b668871f30ec1907cc7bebd12da2dfa7e5df9ed790faccf79723140"}
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.006480 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p7lw" event={"ID":"dc70d269-9a38-4cf3-a494-956420600965","Type":"ContainerStarted","Data":"68283d0512c160b0e30c61731a6ad16d82f1f310d4c685890fdc45a67f883186"}
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.677923 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dbw8z"]
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.681942 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dbw8z"
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.685312 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.691290 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dbw8z"]
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.847031 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1f85b3-124d-434b-b053-4a24859497f1-catalog-content\") pod \"certified-operators-dbw8z\" (UID: \"de1f85b3-124d-434b-b053-4a24859497f1\") " pod="openshift-marketplace/certified-operators-dbw8z"
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.847087 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm7wv\" (UniqueName: \"kubernetes.io/projected/de1f85b3-124d-434b-b053-4a24859497f1-kube-api-access-fm7wv\") pod \"certified-operators-dbw8z\" (UID: \"de1f85b3-124d-434b-b053-4a24859497f1\") " pod="openshift-marketplace/certified-operators-dbw8z"
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.847639 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1f85b3-124d-434b-b053-4a24859497f1-utilities\") pod \"certified-operators-dbw8z\" (UID: \"de1f85b3-124d-434b-b053-4a24859497f1\") " pod="openshift-marketplace/certified-operators-dbw8z"
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.949038 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1f85b3-124d-434b-b053-4a24859497f1-utilities\") pod \"certified-operators-dbw8z\" (UID: \"de1f85b3-124d-434b-b053-4a24859497f1\") " pod="openshift-marketplace/certified-operators-dbw8z"
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.949113 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1f85b3-124d-434b-b053-4a24859497f1-catalog-content\") pod \"certified-operators-dbw8z\" (UID: \"de1f85b3-124d-434b-b053-4a24859497f1\") " pod="openshift-marketplace/certified-operators-dbw8z"
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.949142 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm7wv\" (UniqueName: \"kubernetes.io/projected/de1f85b3-124d-434b-b053-4a24859497f1-kube-api-access-fm7wv\") pod \"certified-operators-dbw8z\" (UID: \"de1f85b3-124d-434b-b053-4a24859497f1\") " pod="openshift-marketplace/certified-operators-dbw8z"
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.949599 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1f85b3-124d-434b-b053-4a24859497f1-utilities\") pod \"certified-operators-dbw8z\" (UID: \"de1f85b3-124d-434b-b053-4a24859497f1\") " pod="openshift-marketplace/certified-operators-dbw8z"
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.949750 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1f85b3-124d-434b-b053-4a24859497f1-catalog-content\") pod \"certified-operators-dbw8z\" (UID: \"de1f85b3-124d-434b-b053-4a24859497f1\") " pod="openshift-marketplace/certified-operators-dbw8z"
Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.978198 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm7wv\" (UniqueName: \"kubernetes.io/projected/de1f85b3-124d-434b-b053-4a24859497f1-kube-api-access-fm7wv\") pod \"certified-operators-dbw8z\" (UID: \"de1f85b3-124d-434b-b053-4a24859497f1\") " pod="openshift-marketplace/certified-operators-dbw8z"
Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.011796 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dbw8z"
Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.279618 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hqkkk"]
Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.281589 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqkkk"
Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.292684 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.309158 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqkkk"]
Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.425426 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dbw8z"]
Mar 07 07:55:48 crc kubenswrapper[4761]: W0307 07:55:48.433129 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde1f85b3_124d_434b_b053_4a24859497f1.slice/crio-bb17c54b06d547054e4627f9eaf05d9d4e0c417c5c283c3a2f13cc92fea75b4e WatchSource:0}: Error finding container bb17c54b06d547054e4627f9eaf05d9d4e0c417c5c283c3a2f13cc92fea75b4e: Status 404 returned error can't find the container with id bb17c54b06d547054e4627f9eaf05d9d4e0c417c5c283c3a2f13cc92fea75b4e
Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.456320 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x88kr\" (UniqueName: \"kubernetes.io/projected/b9d0650f-8057-46e1-a006-f240615ce96f-kube-api-access-x88kr\") pod \"community-operators-hqkkk\" (UID: \"b9d0650f-8057-46e1-a006-f240615ce96f\") " pod="openshift-marketplace/community-operators-hqkkk"
Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.456416 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d0650f-8057-46e1-a006-f240615ce96f-catalog-content\") pod \"community-operators-hqkkk\" (UID: \"b9d0650f-8057-46e1-a006-f240615ce96f\") " pod="openshift-marketplace/community-operators-hqkkk"
Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.457646 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d0650f-8057-46e1-a006-f240615ce96f-utilities\") pod \"community-operators-hqkkk\" (UID: \"b9d0650f-8057-46e1-a006-f240615ce96f\") " pod="openshift-marketplace/community-operators-hqkkk"
Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.559549 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d0650f-8057-46e1-a006-f240615ce96f-utilities\") pod \"community-operators-hqkkk\" (UID: \"b9d0650f-8057-46e1-a006-f240615ce96f\") " pod="openshift-marketplace/community-operators-hqkkk"
Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.559712 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x88kr\" (UniqueName: \"kubernetes.io/projected/b9d0650f-8057-46e1-a006-f240615ce96f-kube-api-access-x88kr\") pod \"community-operators-hqkkk\" (UID: \"b9d0650f-8057-46e1-a006-f240615ce96f\") " pod="openshift-marketplace/community-operators-hqkkk"
Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.559803 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d0650f-8057-46e1-a006-f240615ce96f-catalog-content\") pod \"community-operators-hqkkk\" (UID: \"b9d0650f-8057-46e1-a006-f240615ce96f\") " pod="openshift-marketplace/community-operators-hqkkk"
Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.560589 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d0650f-8057-46e1-a006-f240615ce96f-catalog-content\") pod \"community-operators-hqkkk\" (UID: \"b9d0650f-8057-46e1-a006-f240615ce96f\") " pod="openshift-marketplace/community-operators-hqkkk"
Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.560637 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d0650f-8057-46e1-a006-f240615ce96f-utilities\") pod \"community-operators-hqkkk\" (UID: \"b9d0650f-8057-46e1-a006-f240615ce96f\") " pod="openshift-marketplace/community-operators-hqkkk"
Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.585103 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x88kr\" (UniqueName: \"kubernetes.io/projected/b9d0650f-8057-46e1-a006-f240615ce96f-kube-api-access-x88kr\") pod \"community-operators-hqkkk\" (UID: \"b9d0650f-8057-46e1-a006-f240615ce96f\") " pod="openshift-marketplace/community-operators-hqkkk"
Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.611815 4761 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:55:49 crc kubenswrapper[4761]: I0307 07:55:49.020925 4761 generic.go:334] "Generic (PLEG): container finished" podID="de1f85b3-124d-434b-b053-4a24859497f1" containerID="339648318a6e9b5427143bceeceba292c75d1c67e771b55811cecaa930f9a3dd" exitCode=0 Mar 07 07:55:49 crc kubenswrapper[4761]: I0307 07:55:49.020972 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbw8z" event={"ID":"de1f85b3-124d-434b-b053-4a24859497f1","Type":"ContainerDied","Data":"339648318a6e9b5427143bceeceba292c75d1c67e771b55811cecaa930f9a3dd"} Mar 07 07:55:49 crc kubenswrapper[4761]: I0307 07:55:49.021001 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbw8z" event={"ID":"de1f85b3-124d-434b-b053-4a24859497f1","Type":"ContainerStarted","Data":"bb17c54b06d547054e4627f9eaf05d9d4e0c417c5c283c3a2f13cc92fea75b4e"} Mar 07 07:55:49 crc kubenswrapper[4761]: I0307 07:55:49.055922 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqkkk"] Mar 07 07:55:50 crc kubenswrapper[4761]: I0307 07:55:50.026592 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqkkk" event={"ID":"b9d0650f-8057-46e1-a006-f240615ce96f","Type":"ContainerStarted","Data":"02341b3ca61321da31fc7e5f39cfc880c16400e83089dc102b965fb94ad0a93c"} Mar 07 07:55:51 crc kubenswrapper[4761]: I0307 07:55:51.037352 4761 generic.go:334] "Generic (PLEG): container finished" podID="b9d0650f-8057-46e1-a006-f240615ce96f" containerID="a5af5bfa395578300341bd08d6bb60c913fdfeca43221d253ef215beda8b84fa" exitCode=0 Mar 07 07:55:51 crc kubenswrapper[4761]: I0307 07:55:51.037644 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqkkk" 
event={"ID":"b9d0650f-8057-46e1-a006-f240615ce96f","Type":"ContainerDied","Data":"a5af5bfa395578300341bd08d6bb60c913fdfeca43221d253ef215beda8b84fa"} Mar 07 07:55:52 crc kubenswrapper[4761]: I0307 07:55:52.045653 4761 generic.go:334] "Generic (PLEG): container finished" podID="26b26086-7428-4218-a5c0-64eb4a9d581f" containerID="b494db4d849900a3b7c015894e80c18c7400ac34baba4a3097d723c6ca2e8a22" exitCode=0 Mar 07 07:55:52 crc kubenswrapper[4761]: I0307 07:55:52.045705 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5t8f" event={"ID":"26b26086-7428-4218-a5c0-64eb4a9d581f","Type":"ContainerDied","Data":"b494db4d849900a3b7c015894e80c18c7400ac34baba4a3097d723c6ca2e8a22"} Mar 07 07:55:52 crc kubenswrapper[4761]: I0307 07:55:52.049210 4761 generic.go:334] "Generic (PLEG): container finished" podID="dc70d269-9a38-4cf3-a494-956420600965" containerID="a19bd8009a9586f8eb73f42944a165ed1b1f12911fce67d23f8514c0d264d4a7" exitCode=0 Mar 07 07:55:52 crc kubenswrapper[4761]: I0307 07:55:52.049282 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p7lw" event={"ID":"dc70d269-9a38-4cf3-a494-956420600965","Type":"ContainerDied","Data":"a19bd8009a9586f8eb73f42944a165ed1b1f12911fce67d23f8514c0d264d4a7"} Mar 07 07:55:52 crc kubenswrapper[4761]: I0307 07:55:52.053469 4761 generic.go:334] "Generic (PLEG): container finished" podID="b9d0650f-8057-46e1-a006-f240615ce96f" containerID="0d5f077b0e45c87e62712abff63f9bce05935bd55cdf3e271102626161fa9726" exitCode=0 Mar 07 07:55:52 crc kubenswrapper[4761]: I0307 07:55:52.053536 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqkkk" event={"ID":"b9d0650f-8057-46e1-a006-f240615ce96f","Type":"ContainerDied","Data":"0d5f077b0e45c87e62712abff63f9bce05935bd55cdf3e271102626161fa9726"} Mar 07 07:55:52 crc kubenswrapper[4761]: I0307 07:55:52.058772 4761 generic.go:334] "Generic (PLEG): container finished" 
podID="de1f85b3-124d-434b-b053-4a24859497f1" containerID="a6c173bed6bc51cc797de2dc74a10b5b1aecd189094414c2defc77f7109520ef" exitCode=0 Mar 07 07:55:52 crc kubenswrapper[4761]: I0307 07:55:52.058813 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbw8z" event={"ID":"de1f85b3-124d-434b-b053-4a24859497f1","Type":"ContainerDied","Data":"a6c173bed6bc51cc797de2dc74a10b5b1aecd189094414c2defc77f7109520ef"} Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.066985 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p7lw" event={"ID":"dc70d269-9a38-4cf3-a494-956420600965","Type":"ContainerStarted","Data":"bd5707b177f5f63452dd4db09c3f49b080214cd3b09c8c8b3a9b9133ff30491d"} Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.070427 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqkkk" event={"ID":"b9d0650f-8057-46e1-a006-f240615ce96f","Type":"ContainerStarted","Data":"b03d343e6201f83b6553940feb6351a63b74fda7c742539dcf69f913033f4b35"} Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.072288 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbw8z" event={"ID":"de1f85b3-124d-434b-b053-4a24859497f1","Type":"ContainerStarted","Data":"8531b390e8fdfb11e957a004d6b144d8f26f8d901cad4c5c9163151fb4493a34"} Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.074235 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5t8f" event={"ID":"26b26086-7428-4218-a5c0-64eb4a9d581f","Type":"ContainerStarted","Data":"0a7ced53f43926cd92263ecddcd09dedfc4ca9e7b74f1b95f81fd58e369ec1fd"} Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.092392 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5p7lw" podStartSLOduration=3.505772387 
podStartE2EDuration="8.092379007s" podCreationTimestamp="2026-03-07 07:55:45 +0000 UTC" firstStartedPulling="2026-03-07 07:55:48.011360085 +0000 UTC m=+404.920526550" lastFinishedPulling="2026-03-07 07:55:52.597966655 +0000 UTC m=+409.507133170" observedRunningTime="2026-03-07 07:55:53.091260686 +0000 UTC m=+410.000427181" watchObservedRunningTime="2026-03-07 07:55:53.092379007 +0000 UTC m=+410.001545482" Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.116710 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dbw8z" podStartSLOduration=3.019535547 podStartE2EDuration="6.116690326s" podCreationTimestamp="2026-03-07 07:55:47 +0000 UTC" firstStartedPulling="2026-03-07 07:55:49.433820461 +0000 UTC m=+406.342986946" lastFinishedPulling="2026-03-07 07:55:52.53097524 +0000 UTC m=+409.440141725" observedRunningTime="2026-03-07 07:55:53.111993437 +0000 UTC m=+410.021159912" watchObservedRunningTime="2026-03-07 07:55:53.116690326 +0000 UTC m=+410.025856801" Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.133825 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b5t8f" podStartSLOduration=2.655540957 podStartE2EDuration="8.133807408s" podCreationTimestamp="2026-03-07 07:55:45 +0000 UTC" firstStartedPulling="2026-03-07 07:55:47.004295016 +0000 UTC m=+403.913461531" lastFinishedPulling="2026-03-07 07:55:52.482561487 +0000 UTC m=+409.391727982" observedRunningTime="2026-03-07 07:55:53.130460996 +0000 UTC m=+410.039627531" watchObservedRunningTime="2026-03-07 07:55:53.133807408 +0000 UTC m=+410.042973883" Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.737883 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.760830 4761 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/community-operators-hqkkk" podStartSLOduration=4.359189549 podStartE2EDuration="5.760804852s" podCreationTimestamp="2026-03-07 07:55:48 +0000 UTC" firstStartedPulling="2026-03-07 07:55:51.062854136 +0000 UTC m=+407.972020631" lastFinishedPulling="2026-03-07 07:55:52.464469449 +0000 UTC m=+409.373635934" observedRunningTime="2026-03-07 07:55:53.149350996 +0000 UTC m=+410.058517471" watchObservedRunningTime="2026-03-07 07:55:53.760804852 +0000 UTC m=+410.669971347" Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.799748 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ls7db"] Mar 07 07:55:55 crc kubenswrapper[4761]: I0307 07:55:55.595097 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:55 crc kubenswrapper[4761]: I0307 07:55:55.596663 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:55 crc kubenswrapper[4761]: I0307 07:55:55.653364 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:56 crc kubenswrapper[4761]: I0307 07:55:56.209404 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5p7lw" Mar 07 07:55:56 crc kubenswrapper[4761]: I0307 07:55:56.209514 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5p7lw" Mar 07 07:55:57 crc kubenswrapper[4761]: I0307 07:55:57.161222 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:57 crc kubenswrapper[4761]: I0307 07:55:57.269298 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5p7lw" 
podUID="dc70d269-9a38-4cf3-a494-956420600965" containerName="registry-server" probeResult="failure" output=< Mar 07 07:55:57 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 07:55:57 crc kubenswrapper[4761]: > Mar 07 07:55:58 crc kubenswrapper[4761]: I0307 07:55:58.012689 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:58 crc kubenswrapper[4761]: I0307 07:55:58.012769 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:58 crc kubenswrapper[4761]: I0307 07:55:58.085410 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:58 crc kubenswrapper[4761]: I0307 07:55:58.146424 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:58 crc kubenswrapper[4761]: I0307 07:55:58.611946 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:55:58 crc kubenswrapper[4761]: I0307 07:55:58.612216 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:55:58 crc kubenswrapper[4761]: I0307 07:55:58.676328 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:55:59 crc kubenswrapper[4761]: I0307 07:55:59.173405 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.145972 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547836-m94k2"] Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 
07:56:00.147118 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547836-m94k2" Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.151576 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.153095 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.155271 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.163610 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547836-m94k2"] Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.323222 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jwlc\" (UniqueName: \"kubernetes.io/projected/4b65e7bf-925a-4cb6-b384-de21cbf6c795-kube-api-access-7jwlc\") pod \"auto-csr-approver-29547836-m94k2\" (UID: \"4b65e7bf-925a-4cb6-b384-de21cbf6c795\") " pod="openshift-infra/auto-csr-approver-29547836-m94k2" Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.424836 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jwlc\" (UniqueName: \"kubernetes.io/projected/4b65e7bf-925a-4cb6-b384-de21cbf6c795-kube-api-access-7jwlc\") pod \"auto-csr-approver-29547836-m94k2\" (UID: \"4b65e7bf-925a-4cb6-b384-de21cbf6c795\") " pod="openshift-infra/auto-csr-approver-29547836-m94k2" Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.447465 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jwlc\" (UniqueName: \"kubernetes.io/projected/4b65e7bf-925a-4cb6-b384-de21cbf6c795-kube-api-access-7jwlc\") pod 
\"auto-csr-approver-29547836-m94k2\" (UID: \"4b65e7bf-925a-4cb6-b384-de21cbf6c795\") " pod="openshift-infra/auto-csr-approver-29547836-m94k2" Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.479330 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547836-m94k2" Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.875179 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547836-m94k2"] Mar 07 07:56:00 crc kubenswrapper[4761]: W0307 07:56:00.881250 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b65e7bf_925a_4cb6_b384_de21cbf6c795.slice/crio-20587130d533f64873dd7d8574b2050a21d84e63e601799c392e29df725a367f WatchSource:0}: Error finding container 20587130d533f64873dd7d8574b2050a21d84e63e601799c392e29df725a367f: Status 404 returned error can't find the container with id 20587130d533f64873dd7d8574b2050a21d84e63e601799c392e29df725a367f Mar 07 07:56:01 crc kubenswrapper[4761]: I0307 07:56:01.121122 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547836-m94k2" event={"ID":"4b65e7bf-925a-4cb6-b384-de21cbf6c795","Type":"ContainerStarted","Data":"20587130d533f64873dd7d8574b2050a21d84e63e601799c392e29df725a367f"} Mar 07 07:56:03 crc kubenswrapper[4761]: I0307 07:56:03.132035 4761 generic.go:334] "Generic (PLEG): container finished" podID="4b65e7bf-925a-4cb6-b384-de21cbf6c795" containerID="3cb1b46082fb3b84b3f8cd834240e3995889dc15c1e02d44fee9edb19c7303c1" exitCode=0 Mar 07 07:56:03 crc kubenswrapper[4761]: I0307 07:56:03.132136 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547836-m94k2" event={"ID":"4b65e7bf-925a-4cb6-b384-de21cbf6c795","Type":"ContainerDied","Data":"3cb1b46082fb3b84b3f8cd834240e3995889dc15c1e02d44fee9edb19c7303c1"} Mar 07 07:56:04 crc kubenswrapper[4761]: 
I0307 07:56:04.640882 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547836-m94k2" Mar 07 07:56:04 crc kubenswrapper[4761]: I0307 07:56:04.738560 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jwlc\" (UniqueName: \"kubernetes.io/projected/4b65e7bf-925a-4cb6-b384-de21cbf6c795-kube-api-access-7jwlc\") pod \"4b65e7bf-925a-4cb6-b384-de21cbf6c795\" (UID: \"4b65e7bf-925a-4cb6-b384-de21cbf6c795\") " Mar 07 07:56:04 crc kubenswrapper[4761]: I0307 07:56:04.745750 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b65e7bf-925a-4cb6-b384-de21cbf6c795-kube-api-access-7jwlc" (OuterVolumeSpecName: "kube-api-access-7jwlc") pod "4b65e7bf-925a-4cb6-b384-de21cbf6c795" (UID: "4b65e7bf-925a-4cb6-b384-de21cbf6c795"). InnerVolumeSpecName "kube-api-access-7jwlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:56:04 crc kubenswrapper[4761]: I0307 07:56:04.840627 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jwlc\" (UniqueName: \"kubernetes.io/projected/4b65e7bf-925a-4cb6-b384-de21cbf6c795-kube-api-access-7jwlc\") on node \"crc\" DevicePath \"\"" Mar 07 07:56:05 crc kubenswrapper[4761]: I0307 07:56:05.149476 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547836-m94k2" event={"ID":"4b65e7bf-925a-4cb6-b384-de21cbf6c795","Type":"ContainerDied","Data":"20587130d533f64873dd7d8574b2050a21d84e63e601799c392e29df725a367f"} Mar 07 07:56:05 crc kubenswrapper[4761]: I0307 07:56:05.149521 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20587130d533f64873dd7d8574b2050a21d84e63e601799c392e29df725a367f" Mar 07 07:56:05 crc kubenswrapper[4761]: I0307 07:56:05.149544 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547836-m94k2" Mar 07 07:56:06 crc kubenswrapper[4761]: I0307 07:56:06.276088 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5p7lw" Mar 07 07:56:06 crc kubenswrapper[4761]: I0307 07:56:06.334267 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5p7lw" Mar 07 07:56:18 crc kubenswrapper[4761]: I0307 07:56:18.839248 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" podUID="473ecd8c-4e56-40ac-9444-2d43490c6424" containerName="registry" containerID="cri-o://afb1c01b59eedebf7cd675c015291a324f75150f230ac021a30df9dfdc7a88b6" gracePeriod=30 Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.238911 4761 generic.go:334] "Generic (PLEG): container finished" podID="473ecd8c-4e56-40ac-9444-2d43490c6424" containerID="afb1c01b59eedebf7cd675c015291a324f75150f230ac021a30df9dfdc7a88b6" exitCode=0 Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.239052 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" event={"ID":"473ecd8c-4e56-40ac-9444-2d43490c6424","Type":"ContainerDied","Data":"afb1c01b59eedebf7cd675c015291a324f75150f230ac021a30df9dfdc7a88b6"} Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.273323 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.425900 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-bound-sa-token\") pod \"473ecd8c-4e56-40ac-9444-2d43490c6424\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.425958 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/473ecd8c-4e56-40ac-9444-2d43490c6424-ca-trust-extracted\") pod \"473ecd8c-4e56-40ac-9444-2d43490c6424\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.425996 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-trusted-ca\") pod \"473ecd8c-4e56-40ac-9444-2d43490c6424\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.426013 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-tls\") pod \"473ecd8c-4e56-40ac-9444-2d43490c6424\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.426037 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99v7b\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-kube-api-access-99v7b\") pod \"473ecd8c-4e56-40ac-9444-2d43490c6424\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.426057 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/473ecd8c-4e56-40ac-9444-2d43490c6424-installation-pull-secrets\") pod \"473ecd8c-4e56-40ac-9444-2d43490c6424\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.426292 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"473ecd8c-4e56-40ac-9444-2d43490c6424\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.426635 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-certificates\") pod \"473ecd8c-4e56-40ac-9444-2d43490c6424\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.427422 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "473ecd8c-4e56-40ac-9444-2d43490c6424" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.427888 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "473ecd8c-4e56-40ac-9444-2d43490c6424" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.433204 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "473ecd8c-4e56-40ac-9444-2d43490c6424" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.441630 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/473ecd8c-4e56-40ac-9444-2d43490c6424-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "473ecd8c-4e56-40ac-9444-2d43490c6424" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.448029 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/473ecd8c-4e56-40ac-9444-2d43490c6424-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "473ecd8c-4e56-40ac-9444-2d43490c6424" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.448184 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "473ecd8c-4e56-40ac-9444-2d43490c6424" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.450605 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "473ecd8c-4e56-40ac-9444-2d43490c6424" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.453254 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-kube-api-access-99v7b" (OuterVolumeSpecName: "kube-api-access-99v7b") pod "473ecd8c-4e56-40ac-9444-2d43490c6424" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424"). InnerVolumeSpecName "kube-api-access-99v7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.527544 4761 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.527610 4761 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.527619 4761 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/473ecd8c-4e56-40ac-9444-2d43490c6424-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.527629 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.527637 4761 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.527646 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99v7b\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-kube-api-access-99v7b\") on node \"crc\" DevicePath \"\"" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.527655 4761 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/473ecd8c-4e56-40ac-9444-2d43490c6424-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 07 07:56:20 crc kubenswrapper[4761]: I0307 07:56:20.246405 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" event={"ID":"473ecd8c-4e56-40ac-9444-2d43490c6424","Type":"ContainerDied","Data":"4dd05b87400e520fab187d8e6fc531d0b912721b961e43f10251c6818333d374"} Mar 07 07:56:20 crc kubenswrapper[4761]: I0307 07:56:20.246472 4761 scope.go:117] "RemoveContainer" containerID="afb1c01b59eedebf7cd675c015291a324f75150f230ac021a30df9dfdc7a88b6" Mar 07 07:56:20 crc kubenswrapper[4761]: I0307 07:56:20.246491 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:56:20 crc kubenswrapper[4761]: I0307 07:56:20.266930 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ls7db"] Mar 07 07:56:20 crc kubenswrapper[4761]: I0307 07:56:20.271709 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ls7db"] Mar 07 07:56:21 crc kubenswrapper[4761]: I0307 07:56:21.713102 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="473ecd8c-4e56-40ac-9444-2d43490c6424" path="/var/lib/kubelet/pods/473ecd8c-4e56-40ac-9444-2d43490c6424/volumes" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.391978 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9"] Mar 07 07:56:36 crc kubenswrapper[4761]: E0307 07:56:36.392970 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="473ecd8c-4e56-40ac-9444-2d43490c6424" containerName="registry" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.392992 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="473ecd8c-4e56-40ac-9444-2d43490c6424" containerName="registry" Mar 07 07:56:36 crc kubenswrapper[4761]: E0307 07:56:36.393027 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b65e7bf-925a-4cb6-b384-de21cbf6c795" containerName="oc" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.393040 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b65e7bf-925a-4cb6-b384-de21cbf6c795" containerName="oc" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.393210 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b65e7bf-925a-4cb6-b384-de21cbf6c795" containerName="oc" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.393237 4761 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="473ecd8c-4e56-40ac-9444-2d43490c6424" containerName="registry" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.393853 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.398038 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.398110 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.398456 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.399044 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.399257 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.408663 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9"] Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.477398 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/23bba90e-efcc-4b5a-8793-10887291b848-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-xktz9\" (UID: \"23bba90e-efcc-4b5a-8793-10887291b848\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.477583 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrmm2\" (UniqueName: \"kubernetes.io/projected/23bba90e-efcc-4b5a-8793-10887291b848-kube-api-access-jrmm2\") pod \"cluster-monitoring-operator-6d5b84845-xktz9\" (UID: \"23bba90e-efcc-4b5a-8793-10887291b848\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.477632 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/23bba90e-efcc-4b5a-8793-10887291b848-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-xktz9\" (UID: \"23bba90e-efcc-4b5a-8793-10887291b848\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.580334 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/23bba90e-efcc-4b5a-8793-10887291b848-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-xktz9\" (UID: \"23bba90e-efcc-4b5a-8793-10887291b848\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.580675 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrmm2\" (UniqueName: \"kubernetes.io/projected/23bba90e-efcc-4b5a-8793-10887291b848-kube-api-access-jrmm2\") pod \"cluster-monitoring-operator-6d5b84845-xktz9\" (UID: \"23bba90e-efcc-4b5a-8793-10887291b848\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.580836 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/23bba90e-efcc-4b5a-8793-10887291b848-telemetry-config\") pod 
\"cluster-monitoring-operator-6d5b84845-xktz9\" (UID: \"23bba90e-efcc-4b5a-8793-10887291b848\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.583362 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/23bba90e-efcc-4b5a-8793-10887291b848-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-xktz9\" (UID: \"23bba90e-efcc-4b5a-8793-10887291b848\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.595386 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/23bba90e-efcc-4b5a-8793-10887291b848-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-xktz9\" (UID: \"23bba90e-efcc-4b5a-8793-10887291b848\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.615788 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrmm2\" (UniqueName: \"kubernetes.io/projected/23bba90e-efcc-4b5a-8793-10887291b848-kube-api-access-jrmm2\") pod \"cluster-monitoring-operator-6d5b84845-xktz9\" (UID: \"23bba90e-efcc-4b5a-8793-10887291b848\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.721489 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.991407 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9"] Mar 07 07:56:36 crc kubenswrapper[4761]: W0307 07:56:36.998468 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23bba90e_efcc_4b5a_8793_10887291b848.slice/crio-7f314366784516532945ebd7ac995227be1baef93e060ef54c011ec2fd7905fa WatchSource:0}: Error finding container 7f314366784516532945ebd7ac995227be1baef93e060ef54c011ec2fd7905fa: Status 404 returned error can't find the container with id 7f314366784516532945ebd7ac995227be1baef93e060ef54c011ec2fd7905fa Mar 07 07:56:37 crc kubenswrapper[4761]: I0307 07:56:37.356087 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" event={"ID":"23bba90e-efcc-4b5a-8793-10887291b848","Type":"ContainerStarted","Data":"7f314366784516532945ebd7ac995227be1baef93e060ef54c011ec2fd7905fa"} Mar 07 07:56:39 crc kubenswrapper[4761]: I0307 07:56:39.368350 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" event={"ID":"23bba90e-efcc-4b5a-8793-10887291b848","Type":"ContainerStarted","Data":"a6a211330bdcc35ab7b67369b69fefb098e58e40fe63953c33d47f9a70d6f510"} Mar 07 07:56:39 crc kubenswrapper[4761]: I0307 07:56:39.395446 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" podStartSLOduration=1.593805849 podStartE2EDuration="3.395428172s" podCreationTimestamp="2026-03-07 07:56:36 +0000 UTC" firstStartedPulling="2026-03-07 07:56:37.010126423 +0000 UTC m=+453.919292938" lastFinishedPulling="2026-03-07 07:56:38.811748786 +0000 UTC m=+455.720915261" 
observedRunningTime="2026-03-07 07:56:39.389060481 +0000 UTC m=+456.298226956" watchObservedRunningTime="2026-03-07 07:56:39.395428172 +0000 UTC m=+456.304594647" Mar 07 07:56:39 crc kubenswrapper[4761]: I0307 07:56:39.542589 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6"] Mar 07 07:56:39 crc kubenswrapper[4761]: I0307 07:56:39.543318 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 07:56:39 crc kubenswrapper[4761]: I0307 07:56:39.548134 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-x78b7" Mar 07 07:56:39 crc kubenswrapper[4761]: I0307 07:56:39.548523 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 07 07:56:39 crc kubenswrapper[4761]: I0307 07:56:39.556527 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d29980e5-d546-4d88-9ff3-1ee39ddda37c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lr6b6\" (UID: \"d29980e5-d546-4d88-9ff3-1ee39ddda37c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 07:56:39 crc kubenswrapper[4761]: I0307 07:56:39.563315 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6"] Mar 07 07:56:39 crc kubenswrapper[4761]: I0307 07:56:39.657999 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d29980e5-d546-4d88-9ff3-1ee39ddda37c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lr6b6\" (UID: 
\"d29980e5-d546-4d88-9ff3-1ee39ddda37c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 07:56:39 crc kubenswrapper[4761]: E0307 07:56:39.658155 4761 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Mar 07 07:56:39 crc kubenswrapper[4761]: E0307 07:56:39.658226 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d29980e5-d546-4d88-9ff3-1ee39ddda37c-tls-certificates podName:d29980e5-d546-4d88-9ff3-1ee39ddda37c nodeName:}" failed. No retries permitted until 2026-03-07 07:56:40.158206297 +0000 UTC m=+457.067372772 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/d29980e5-d546-4d88-9ff3-1ee39ddda37c-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-lr6b6" (UID: "d29980e5-d546-4d88-9ff3-1ee39ddda37c") : secret "prometheus-operator-admission-webhook-tls" not found Mar 07 07:56:40 crc kubenswrapper[4761]: I0307 07:56:40.164998 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d29980e5-d546-4d88-9ff3-1ee39ddda37c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lr6b6\" (UID: \"d29980e5-d546-4d88-9ff3-1ee39ddda37c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 07:56:40 crc kubenswrapper[4761]: I0307 07:56:40.174854 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d29980e5-d546-4d88-9ff3-1ee39ddda37c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lr6b6\" (UID: \"d29980e5-d546-4d88-9ff3-1ee39ddda37c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 07:56:40 crc kubenswrapper[4761]: I0307 
07:56:40.454391 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 07:56:41 crc kubenswrapper[4761]: I0307 07:56:40.971882 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6"] Mar 07 07:56:41 crc kubenswrapper[4761]: W0307 07:56:40.989950 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd29980e5_d546_4d88_9ff3_1ee39ddda37c.slice/crio-f0b082f7dd45040ab2bc6626b212d7dc9eb0baf327d27409bbb0b84dbafc71b0 WatchSource:0}: Error finding container f0b082f7dd45040ab2bc6626b212d7dc9eb0baf327d27409bbb0b84dbafc71b0: Status 404 returned error can't find the container with id f0b082f7dd45040ab2bc6626b212d7dc9eb0baf327d27409bbb0b84dbafc71b0 Mar 07 07:56:41 crc kubenswrapper[4761]: I0307 07:56:41.383744 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" event={"ID":"d29980e5-d546-4d88-9ff3-1ee39ddda37c","Type":"ContainerStarted","Data":"f0b082f7dd45040ab2bc6626b212d7dc9eb0baf327d27409bbb0b84dbafc71b0"} Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.401666 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" event={"ID":"d29980e5-d546-4d88-9ff3-1ee39ddda37c","Type":"ContainerStarted","Data":"135b390898a3a827582358761059bcc63765210ef6cca72bb8fb26dcfd8484b3"} Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.402115 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.412792 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.425961 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podStartSLOduration=3.032833966 podStartE2EDuration="4.425934504s" podCreationTimestamp="2026-03-07 07:56:39 +0000 UTC" firstStartedPulling="2026-03-07 07:56:40.993003775 +0000 UTC m=+457.902170290" lastFinishedPulling="2026-03-07 07:56:42.386104353 +0000 UTC m=+459.295270828" observedRunningTime="2026-03-07 07:56:43.422413389 +0000 UTC m=+460.331579934" watchObservedRunningTime="2026-03-07 07:56:43.425934504 +0000 UTC m=+460.335101019" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.600376 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-d9r6v"] Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.601336 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.602804 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.603088 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-6j7gl" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.603488 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.603659 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.632511 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.632612 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpg9f\" (UniqueName: \"kubernetes.io/projected/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-kube-api-access-qpg9f\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.632640 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-metrics-client-ca\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.632690 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.653581 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-d9r6v"] Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.734499 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.734603 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: E0307 07:56:43.734826 4761 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Mar 07 07:56:43 crc kubenswrapper[4761]: 
I0307 07:56:43.734859 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpg9f\" (UniqueName: \"kubernetes.io/projected/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-kube-api-access-qpg9f\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: E0307 07:56:43.734922 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-prometheus-operator-tls podName:2bb1edf7-d220-41cc-861c-b3ae4ea51d89 nodeName:}" failed. No retries permitted until 2026-03-07 07:56:44.234892894 +0000 UTC m=+461.144059389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-prometheus-operator-tls") pod "prometheus-operator-db54df47d-d9r6v" (UID: "2bb1edf7-d220-41cc-861c-b3ae4ea51d89") : secret "prometheus-operator-tls" not found Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.734965 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-metrics-client-ca\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.736404 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-metrics-client-ca\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.758584 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.769371 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.769448 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.770946 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpg9f\" (UniqueName: \"kubernetes.io/projected/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-kube-api-access-qpg9f\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:44 crc kubenswrapper[4761]: I0307 07:56:44.241426 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 
07:56:44 crc kubenswrapper[4761]: I0307 07:56:44.247737 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:44 crc kubenswrapper[4761]: I0307 07:56:44.519579 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:44 crc kubenswrapper[4761]: I0307 07:56:44.791106 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-d9r6v"] Mar 07 07:56:45 crc kubenswrapper[4761]: I0307 07:56:45.417516 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" event={"ID":"2bb1edf7-d220-41cc-861c-b3ae4ea51d89","Type":"ContainerStarted","Data":"41c10613fd4dc2bf38f1d6e1952292094ba5e963311700f04d763e247a2e197e"} Mar 07 07:56:46 crc kubenswrapper[4761]: I0307 07:56:46.435311 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" event={"ID":"2bb1edf7-d220-41cc-861c-b3ae4ea51d89","Type":"ContainerStarted","Data":"0c05a8e323daf28c2e6e63f440c7c78a7710ef67cdb6ee5c0b749dd00c0d3981"} Mar 07 07:56:47 crc kubenswrapper[4761]: I0307 07:56:47.446523 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" event={"ID":"2bb1edf7-d220-41cc-861c-b3ae4ea51d89","Type":"ContainerStarted","Data":"ca4a88246f4c42a4c7aa492bd727a0393ea3eb9821011f824a00d9d0847893ea"} Mar 07 07:56:47 crc kubenswrapper[4761]: I0307 07:56:47.475359 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" 
podStartSLOduration=3.057871053 podStartE2EDuration="4.475327812s" podCreationTimestamp="2026-03-07 07:56:43 +0000 UTC" firstStartedPulling="2026-03-07 07:56:44.80404178 +0000 UTC m=+461.713208275" lastFinishedPulling="2026-03-07 07:56:46.221498559 +0000 UTC m=+463.130665034" observedRunningTime="2026-03-07 07:56:47.473120652 +0000 UTC m=+464.382287167" watchObservedRunningTime="2026-03-07 07:56:47.475327812 +0000 UTC m=+464.384494327" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.963114 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2"] Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.964642 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.966173 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.966302 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.966480 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-25w2m" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.985687 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2"] Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.990216 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk"] Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.991455 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.994547 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.994590 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.994906 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-w8grw" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.995075 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.998113 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk"] Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.001110 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-rrnkf"] Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.002455 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.004369 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.004570 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.006020 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mkmf\" (UniqueName: \"kubernetes.io/projected/ef93a4b3-029b-4caf-9b8b-14595f247c7f-kube-api-access-7mkmf\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.006116 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef93a4b3-029b-4caf-9b8b-14595f247c7f-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.006173 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ef93a4b3-029b-4caf-9b8b-14595f247c7f-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.006208 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ef93a4b3-029b-4caf-9b8b-14595f247c7f-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.006424 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-hswcb" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.107827 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-sys\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.107879 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlrw9\" (UniqueName: \"kubernetes.io/projected/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-kube-api-access-hlrw9\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.107921 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef93a4b3-029b-4caf-9b8b-14595f247c7f-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: E0307 07:56:49.108015 4761 secret.go:188] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Mar 07 07:56:49 
crc kubenswrapper[4761]: I0307 07:56:49.108027 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fc672282-6000-48f1-bd85-a192c0a352a2-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: E0307 07:56:49.108065 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef93a4b3-029b-4caf-9b8b-14595f247c7f-openshift-state-metrics-tls podName:ef93a4b3-029b-4caf-9b8b-14595f247c7f nodeName:}" failed. No retries permitted until 2026-03-07 07:56:49.608049516 +0000 UTC m=+466.517215991 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ef93a4b3-029b-4caf-9b8b-14595f247c7f-openshift-state-metrics-tls") pod "openshift-state-metrics-566fddb674-8tgp2" (UID: "ef93a4b3-029b-4caf-9b8b-14595f247c7f") : secret "openshift-state-metrics-tls" not found Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108079 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108164 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-wtmp\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 
07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108292 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ef93a4b3-029b-4caf-9b8b-14595f247c7f-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108346 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108379 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108420 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-tls\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108453 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/fc672282-6000-48f1-bd85-a192c0a352a2-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108486 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ef93a4b3-029b-4caf-9b8b-14595f247c7f-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108531 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhgzm\" (UniqueName: \"kubernetes.io/projected/fc672282-6000-48f1-bd85-a192c0a352a2-kube-api-access-mhgzm\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108569 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-textfile\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108600 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: 
\"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108647 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mkmf\" (UniqueName: \"kubernetes.io/projected/ef93a4b3-029b-4caf-9b8b-14595f247c7f-kube-api-access-7mkmf\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108734 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-root\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108762 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-metrics-client-ca\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.109881 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ef93a4b3-029b-4caf-9b8b-14595f247c7f-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.113662 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/ef93a4b3-029b-4caf-9b8b-14595f247c7f-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.137216 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mkmf\" (UniqueName: \"kubernetes.io/projected/ef93a4b3-029b-4caf-9b8b-14595f247c7f-kube-api-access-7mkmf\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209425 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209469 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209489 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/fc672282-6000-48f1-bd85-a192c0a352a2-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209508 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-tls\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209531 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhgzm\" (UniqueName: \"kubernetes.io/projected/fc672282-6000-48f1-bd85-a192c0a352a2-kube-api-access-mhgzm\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209549 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-textfile\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209567 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209597 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-root\") pod 
\"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209611 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-metrics-client-ca\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209625 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-sys\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209641 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlrw9\" (UniqueName: \"kubernetes.io/projected/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-kube-api-access-hlrw9\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209684 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fc672282-6000-48f1-bd85-a192c0a352a2-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209702 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" 
(UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209731 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-wtmp\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209879 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-wtmp\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.210477 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-sys\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.210554 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-root\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: E0307 07:56:49.211105 4761 secret.go:188] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.211144 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: 
\"kubernetes.io/empty-dir/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-textfile\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: E0307 07:56:49.211163 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-tls podName:fc672282-6000-48f1-bd85-a192c0a352a2 nodeName:}" failed. No retries permitted until 2026-03-07 07:56:49.711148172 +0000 UTC m=+466.620314647 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-tls") pod "kube-state-metrics-777cb5bd5d-pdplk" (UID: "fc672282-6000-48f1-bd85-a192c0a352a2") : secret "kube-state-metrics-tls" not found Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.211530 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-metrics-client-ca\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.211623 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/fc672282-6000-48f1-bd85-a192c0a352a2-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.211800 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fc672282-6000-48f1-bd85-a192c0a352a2-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" 
(UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.211879 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.214572 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.214760 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-tls\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.214843 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.231581 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hlrw9\" (UniqueName: \"kubernetes.io/projected/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-kube-api-access-hlrw9\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.234601 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhgzm\" (UniqueName: \"kubernetes.io/projected/fc672282-6000-48f1-bd85-a192c0a352a2-kube-api-access-mhgzm\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.326407 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: W0307 07:56:49.345767 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e82c2c8_18b6_4dc7_bafc_5e8cc8381a79.slice/crio-9626203d0dfd1d385c3be8381defd2346b32669758d2c2aa5fe3cf9c73fd5fac WatchSource:0}: Error finding container 9626203d0dfd1d385c3be8381defd2346b32669758d2c2aa5fe3cf9c73fd5fac: Status 404 returned error can't find the container with id 9626203d0dfd1d385c3be8381defd2346b32669758d2c2aa5fe3cf9c73fd5fac Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.458612 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rrnkf" event={"ID":"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79","Type":"ContainerStarted","Data":"9626203d0dfd1d385c3be8381defd2346b32669758d2c2aa5fe3cf9c73fd5fac"} Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.615650 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef93a4b3-029b-4caf-9b8b-14595f247c7f-openshift-state-metrics-tls\") pod 
\"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.623534 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef93a4b3-029b-4caf-9b8b-14595f247c7f-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.716825 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.721017 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.881435 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.917041 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.082365 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.089312 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.092561 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.093762 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.094326 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.094889 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.094928 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.095007 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.095175 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-hvqkf" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.095280 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.095443 4761 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.107160 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121062 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121131 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-web-config\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121159 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3201b948-3770-482b-96c1-82c14a5fd9a4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121185 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-config-volume\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 
07:56:50.121207 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121231 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3201b948-3770-482b-96c1-82c14a5fd9a4-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121257 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121277 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3201b948-3770-482b-96c1-82c14a5fd9a4-config-out\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121303 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: 
\"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121324 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3201b948-3770-482b-96c1-82c14a5fd9a4-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121341 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3201b948-3770-482b-96c1-82c14a5fd9a4-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121360 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rln7d\" (UniqueName: \"kubernetes.io/projected/3201b948-3770-482b-96c1-82c14a5fd9a4-kube-api-access-rln7d\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222291 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222553 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3201b948-3770-482b-96c1-82c14a5fd9a4-metrics-client-ca\") pod 
\"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222585 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222601 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3201b948-3770-482b-96c1-82c14a5fd9a4-config-out\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222631 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222648 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3201b948-3770-482b-96c1-82c14a5fd9a4-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222665 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3201b948-3770-482b-96c1-82c14a5fd9a4-tls-assets\") pod 
\"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222682 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rln7d\" (UniqueName: \"kubernetes.io/projected/3201b948-3770-482b-96c1-82c14a5fd9a4-kube-api-access-rln7d\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222704 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222750 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-web-config\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222772 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3201b948-3770-482b-96c1-82c14a5fd9a4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222787 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-config-volume\") pod 
\"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.224076 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3201b948-3770-482b-96c1-82c14a5fd9a4-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: E0307 07:56:50.226155 4761 secret.go:188] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 07 07:56:50 crc kubenswrapper[4761]: E0307 07:56:50.226229 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-main-tls podName:3201b948-3770-482b-96c1-82c14a5fd9a4 nodeName:}" failed. No retries permitted until 2026-03-07 07:56:50.726212671 +0000 UTC m=+467.635379146 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "3201b948-3770-482b-96c1-82c14a5fd9a4") : secret "alertmanager-main-tls" not found Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.226924 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3201b948-3770-482b-96c1-82c14a5fd9a4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.227108 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3201b948-3770-482b-96c1-82c14a5fd9a4-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.227213 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-config-volume\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.233925 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3201b948-3770-482b-96c1-82c14a5fd9a4-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.234642 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-web-config\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.235615 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.236447 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3201b948-3770-482b-96c1-82c14a5fd9a4-config-out\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.237293 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.238178 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.259825 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rln7d\" 
(UniqueName: \"kubernetes.io/projected/3201b948-3770-482b-96c1-82c14a5fd9a4-kube-api-access-rln7d\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.303333 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk"] Mar 07 07:56:50 crc kubenswrapper[4761]: W0307 07:56:50.312937 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc672282_6000_48f1_bd85_a192c0a352a2.slice/crio-f269edb0a8b987b892001f8e5c91b837415313f1688d6ff831c24a057868d549 WatchSource:0}: Error finding container f269edb0a8b987b892001f8e5c91b837415313f1688d6ff831c24a057868d549: Status 404 returned error can't find the container with id f269edb0a8b987b892001f8e5c91b837415313f1688d6ff831c24a057868d549 Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.453785 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2"] Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.463253 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" event={"ID":"fc672282-6000-48f1-bd85-a192c0a352a2","Type":"ContainerStarted","Data":"f269edb0a8b987b892001f8e5c91b837415313f1688d6ff831c24a057868d549"} Mar 07 07:56:50 crc kubenswrapper[4761]: W0307 07:56:50.645394 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef93a4b3_029b_4caf_9b8b_14595f247c7f.slice/crio-74818689b2d465f81c00cc789aa7acf9bf7818a88f49f75a4914a95cf4b5fe8a WatchSource:0}: Error finding container 74818689b2d465f81c00cc789aa7acf9bf7818a88f49f75a4914a95cf4b5fe8a: Status 404 returned error can't find the container with id 
74818689b2d465f81c00cc789aa7acf9bf7818a88f49f75a4914a95cf4b5fe8a Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.731150 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.739402 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.762463 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.062428 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-6f4577c6dd-q542m"] Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.067428 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.072179 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.072320 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.072375 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.072402 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-cd94k9vimrkem" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.072378 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-rzd99" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.072557 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.072906 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.078970 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6f4577c6dd-q542m"] Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.139969 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " 
pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.140117 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-tls\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.140142 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fe7ce149-7c15-4b79-a744-d98a58d8407d-metrics-client-ca\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.140214 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.140253 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26zgb\" (UniqueName: \"kubernetes.io/projected/fe7ce149-7c15-4b79-a744-d98a58d8407d-kube-api-access-26zgb\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.140275 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.140323 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.140362 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-grpc-tls\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.241710 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.241823 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-tls\") pod 
\"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.241842 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fe7ce149-7c15-4b79-a744-d98a58d8407d-metrics-client-ca\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.241928 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.241987 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26zgb\" (UniqueName: \"kubernetes.io/projected/fe7ce149-7c15-4b79-a744-d98a58d8407d-kube-api-access-26zgb\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.242015 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.242166 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.242209 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-grpc-tls\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.245447 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.246230 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fe7ce149-7c15-4b79-a744-d98a58d8407d-metrics-client-ca\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.250935 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.251032 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: 
\"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.251050 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-grpc-tls\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.251783 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-tls\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.260441 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.261979 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc 
kubenswrapper[4761]: W0307 07:56:51.270202 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3201b948_3770_482b_96c1_82c14a5fd9a4.slice/crio-d3e5bf23e9621ce6af2cad9a1635c8fcf6c11a3e6baf32ee4095f1122e0f80fc WatchSource:0}: Error finding container d3e5bf23e9621ce6af2cad9a1635c8fcf6c11a3e6baf32ee4095f1122e0f80fc: Status 404 returned error can't find the container with id d3e5bf23e9621ce6af2cad9a1635c8fcf6c11a3e6baf32ee4095f1122e0f80fc Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.272496 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26zgb\" (UniqueName: \"kubernetes.io/projected/fe7ce149-7c15-4b79-a744-d98a58d8407d-kube-api-access-26zgb\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.406320 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.470486 4761 generic.go:334] "Generic (PLEG): container finished" podID="0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79" containerID="5feaf77d35f3e9fdf3eb009db5d3fd62e89b34c6aa5c4b30afa5ac667e0f1758" exitCode=0 Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.470552 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rrnkf" event={"ID":"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79","Type":"ContainerDied","Data":"5feaf77d35f3e9fdf3eb009db5d3fd62e89b34c6aa5c4b30afa5ac667e0f1758"} Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.473372 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3201b948-3770-482b-96c1-82c14a5fd9a4","Type":"ContainerStarted","Data":"d3e5bf23e9621ce6af2cad9a1635c8fcf6c11a3e6baf32ee4095f1122e0f80fc"} Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.474825 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" event={"ID":"ef93a4b3-029b-4caf-9b8b-14595f247c7f","Type":"ContainerStarted","Data":"e738a79f45a15c566e362e944ddbfe0698b967e2a55a6292b9f0d4133debf1dd"} Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.474850 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" event={"ID":"ef93a4b3-029b-4caf-9b8b-14595f247c7f","Type":"ContainerStarted","Data":"5fe356c9738f5ef32c8cf393961e160af74420846e742c8d24378542aed6a0a8"} Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.474861 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" event={"ID":"ef93a4b3-029b-4caf-9b8b-14595f247c7f","Type":"ContainerStarted","Data":"74818689b2d465f81c00cc789aa7acf9bf7818a88f49f75a4914a95cf4b5fe8a"} Mar 07 
07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.897239 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6f4577c6dd-q542m"] Mar 07 07:56:51 crc kubenswrapper[4761]: W0307 07:56:51.901696 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe7ce149_7c15_4b79_a744_d98a58d8407d.slice/crio-91aba7275e561bc14ed57e02d3789fb9faa76b059773a6bca2373685aeae39f6 WatchSource:0}: Error finding container 91aba7275e561bc14ed57e02d3789fb9faa76b059773a6bca2373685aeae39f6: Status 404 returned error can't find the container with id 91aba7275e561bc14ed57e02d3789fb9faa76b059773a6bca2373685aeae39f6 Mar 07 07:56:52 crc kubenswrapper[4761]: I0307 07:56:52.481894 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rrnkf" event={"ID":"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79","Type":"ContainerStarted","Data":"f847002d7e38f9aaf0110c3be752bcbc265068c2ad7f88072380fecff3b99704"} Mar 07 07:56:52 crc kubenswrapper[4761]: I0307 07:56:52.482190 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rrnkf" event={"ID":"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79","Type":"ContainerStarted","Data":"eae43924ad8c8feea907397545ce44344c0e88edd0a9cfaf6484298478222173"} Mar 07 07:56:52 crc kubenswrapper[4761]: I0307 07:56:52.485425 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" event={"ID":"fc672282-6000-48f1-bd85-a192c0a352a2","Type":"ContainerStarted","Data":"397f779ae3b2d5c686022cb0642ded47e0a501563a081b5f6c4dc9732f576ee9"} Mar 07 07:56:52 crc kubenswrapper[4761]: I0307 07:56:52.485455 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" 
event={"ID":"fc672282-6000-48f1-bd85-a192c0a352a2","Type":"ContainerStarted","Data":"30c525b43285733b1fc64450a5375937b035956bcc1b54d7fbe054802edd1464"} Mar 07 07:56:52 crc kubenswrapper[4761]: I0307 07:56:52.485467 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" event={"ID":"fc672282-6000-48f1-bd85-a192c0a352a2","Type":"ContainerStarted","Data":"9ed39e2ca47f8fbd77426b515b9ca5da764d97145d2ca89886d2fb90e51dc427"} Mar 07 07:56:52 crc kubenswrapper[4761]: I0307 07:56:52.486522 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" event={"ID":"fe7ce149-7c15-4b79-a744-d98a58d8407d","Type":"ContainerStarted","Data":"91aba7275e561bc14ed57e02d3789fb9faa76b059773a6bca2373685aeae39f6"} Mar 07 07:56:52 crc kubenswrapper[4761]: I0307 07:56:52.505900 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-rrnkf" podStartSLOduration=3.122157187 podStartE2EDuration="4.505873902s" podCreationTimestamp="2026-03-07 07:56:48 +0000 UTC" firstStartedPulling="2026-03-07 07:56:49.347670413 +0000 UTC m=+466.256836888" lastFinishedPulling="2026-03-07 07:56:50.731387108 +0000 UTC m=+467.640553603" observedRunningTime="2026-03-07 07:56:52.498814684 +0000 UTC m=+469.407981179" watchObservedRunningTime="2026-03-07 07:56:52.505873902 +0000 UTC m=+469.415040417" Mar 07 07:56:52 crc kubenswrapper[4761]: I0307 07:56:52.520548 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" podStartSLOduration=3.028979954 podStartE2EDuration="4.520524494s" podCreationTimestamp="2026-03-07 07:56:48 +0000 UTC" firstStartedPulling="2026-03-07 07:56:50.314851101 +0000 UTC m=+467.224017576" lastFinishedPulling="2026-03-07 07:56:51.806395641 +0000 UTC m=+468.715562116" observedRunningTime="2026-03-07 07:56:52.512312975 +0000 UTC m=+469.421479450" 
watchObservedRunningTime="2026-03-07 07:56:52.520524494 +0000 UTC m=+469.429691009" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.496316 4761 generic.go:334] "Generic (PLEG): container finished" podID="3201b948-3770-482b-96c1-82c14a5fd9a4" containerID="b168f1a290d9b4211ec8397e0111051e2e2a5334a918767ce895d18fc2b4e687" exitCode=0 Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.496376 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3201b948-3770-482b-96c1-82c14a5fd9a4","Type":"ContainerDied","Data":"b168f1a290d9b4211ec8397e0111051e2e2a5334a918767ce895d18fc2b4e687"} Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.498689 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" event={"ID":"ef93a4b3-029b-4caf-9b8b-14595f247c7f","Type":"ContainerStarted","Data":"fd24d3fb4e9cc5ddecb2ef702dea54269457b6c0082ebd03533b2af172d05922"} Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.561522 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" podStartSLOduration=3.592015439 podStartE2EDuration="5.561501867s" podCreationTimestamp="2026-03-07 07:56:48 +0000 UTC" firstStartedPulling="2026-03-07 07:56:50.943849129 +0000 UTC m=+467.853015604" lastFinishedPulling="2026-03-07 07:56:52.913335557 +0000 UTC m=+469.822502032" observedRunningTime="2026-03-07 07:56:53.553078741 +0000 UTC m=+470.462245276" watchObservedRunningTime="2026-03-07 07:56:53.561501867 +0000 UTC m=+470.470668342" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.832962 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-559c944c6f-b9jgm"] Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.833808 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.844939 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-559c944c6f-b9jgm"]
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.881909 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-service-ca\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.882032 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qllg6\" (UniqueName: \"kubernetes.io/projected/b38a2995-784e-4f3b-8a16-0523c6608976-kube-api-access-qllg6\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.882062 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-console-config\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.882136 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-serving-cert\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.882161 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-trusted-ca-bundle\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.882186 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-oauth-serving-cert\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.882214 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-oauth-config\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.983709 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-serving-cert\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.983765 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-trusted-ca-bundle\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.983783 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-oauth-serving-cert\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.983798 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-oauth-config\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.983820 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-service-ca\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.983868 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qllg6\" (UniqueName: \"kubernetes.io/projected/b38a2995-784e-4f3b-8a16-0523c6608976-kube-api-access-qllg6\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.983887 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-console-config\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.984749 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-service-ca\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.984837 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-console-config\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.985147 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-trusted-ca-bundle\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.985817 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-oauth-serving-cert\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.988250 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-oauth-config\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.988864 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-serving-cert\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.003093 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qllg6\" (UniqueName: \"kubernetes.io/projected/b38a2995-784e-4f3b-8a16-0523c6608976-kube-api-access-qllg6\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.151221 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-559c944c6f-b9jgm"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.370948 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-854cd44758-k9qwx"]
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.371906 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.375921 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.376011 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.377008 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-bxdq4"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.377119 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.377203 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-4tds9n4pmtcub"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.377273 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.384552 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-854cd44758-k9qwx"]
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.389515 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4d4f9001-7d67-467b-8028-ec6162564829-secret-metrics-server-tls\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.389628 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzbnm\" (UniqueName: \"kubernetes.io/projected/4d4f9001-7d67-467b-8028-ec6162564829-kube-api-access-kzbnm\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.389739 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4d4f9001-7d67-467b-8028-ec6162564829-metrics-server-audit-profiles\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.389844 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/4d4f9001-7d67-467b-8028-ec6162564829-audit-log\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.389889 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f9001-7d67-467b-8028-ec6162564829-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.389919 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d4f9001-7d67-467b-8028-ec6162564829-client-ca-bundle\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.389980 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4d4f9001-7d67-467b-8028-ec6162564829-secret-metrics-client-certs\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.491553 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4d4f9001-7d67-467b-8028-ec6162564829-secret-metrics-server-tls\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.491862 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzbnm\" (UniqueName: \"kubernetes.io/projected/4d4f9001-7d67-467b-8028-ec6162564829-kube-api-access-kzbnm\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.491899 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4d4f9001-7d67-467b-8028-ec6162564829-metrics-server-audit-profiles\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.491946 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/4d4f9001-7d67-467b-8028-ec6162564829-audit-log\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.491986 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f9001-7d67-467b-8028-ec6162564829-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.492011 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d4f9001-7d67-467b-8028-ec6162564829-client-ca-bundle\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.492042 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4d4f9001-7d67-467b-8028-ec6162564829-secret-metrics-client-certs\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.493101 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f9001-7d67-467b-8028-ec6162564829-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.493396 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/4d4f9001-7d67-467b-8028-ec6162564829-audit-log\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.494173 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4d4f9001-7d67-467b-8028-ec6162564829-metrics-server-audit-profiles\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.498081 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4d4f9001-7d67-467b-8028-ec6162564829-secret-metrics-client-certs\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.498272 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4d4f9001-7d67-467b-8028-ec6162564829-secret-metrics-server-tls\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.508537 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d4f9001-7d67-467b-8028-ec6162564829-client-ca-bundle\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.519631 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzbnm\" (UniqueName: \"kubernetes.io/projected/4d4f9001-7d67-467b-8028-ec6162564829-kube-api-access-kzbnm\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.698104 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.701707 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-559c944c6f-b9jgm"]
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.782891 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r"]
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.783759 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.786190 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.786369 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.793468 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r"]
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.899811 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/08721f50-8882-42b0-9370-cbe4508753d3-monitoring-plugin-cert\") pod \"monitoring-plugin-67c8dd59f5-sbh4r\" (UID: \"08721f50-8882-42b0-9370-cbe4508753d3\") " pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r"
Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.952171 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.001195 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/08721f50-8882-42b0-9370-cbe4508753d3-monitoring-plugin-cert\") pod \"monitoring-plugin-67c8dd59f5-sbh4r\" (UID: \"08721f50-8882-42b0-9370-cbe4508753d3\") " pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.008707 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/08721f50-8882-42b0-9370-cbe4508753d3-monitoring-plugin-cert\") pod \"monitoring-plugin-67c8dd59f5-sbh4r\" (UID: \"08721f50-8882-42b0-9370-cbe4508753d3\") " pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.114689 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.169555 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-854cd44758-k9qwx"]
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.332117 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.333853 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.338781 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.338924 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.339038 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-fhah9r8o0ud4q"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.339075 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.339201 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-ptj8w"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.339311 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.339687 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.339791 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.342990 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.343529 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.343692 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.344825 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.347925 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.357467 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.405854 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.405892 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.405912 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-config\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.405936 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.405952 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-config-out\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.405969 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.405987 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-web-config\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406008 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2n6f\" (UniqueName: \"kubernetes.io/projected/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-kube-api-access-j2n6f\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406023 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406037 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406058 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406082 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406139 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406155 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406178 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406198 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406219 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406238 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507043 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507323 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-config\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507341 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507361 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507379 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-config-out\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507395 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507413 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-web-config\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507432 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2n6f\" (UniqueName: \"kubernetes.io/projected/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-kube-api-access-j2n6f\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507448 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507461 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507482 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507499 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507546 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507563 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507583 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507600 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507619 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507638 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 07:56:55 crc 
kubenswrapper[4761]: I0307 07:56:55.508020 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.508311 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.510055 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.510639 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.512095 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc 
kubenswrapper[4761]: I0307 07:56:55.512495 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.515170 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-config\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.517045 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-config-out\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.517341 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.518269 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.518330 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"web-config\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-web-config\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.526288 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.528119 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.528803 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.531684 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.533450 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.534244 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.534848 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2n6f\" (UniqueName: \"kubernetes.io/projected/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-kube-api-access-j2n6f\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.536296 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-559c944c6f-b9jgm" event={"ID":"b38a2995-784e-4f3b-8a16-0523c6608976","Type":"ContainerStarted","Data":"aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119"} Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.536386 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-559c944c6f-b9jgm" event={"ID":"b38a2995-784e-4f3b-8a16-0523c6608976","Type":"ContainerStarted","Data":"2b2c2fbdbea4d9ece5b1cac1e7b8c486e2c9f72129148569cb7377d6f110d9f7"} Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.540117 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" event={"ID":"4d4f9001-7d67-467b-8028-ec6162564829","Type":"ContainerStarted","Data":"1d1c7645dad4a787d2ea0d3c6587490c91d6cd5b88aeea18be125b7b151153b2"} Mar 07 07:56:55 crc 
kubenswrapper[4761]: I0307 07:56:55.545073 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" event={"ID":"fe7ce149-7c15-4b79-a744-d98a58d8407d","Type":"ContainerStarted","Data":"d7e670f03511eeef62b7a5418cea7db99b70f3215820e4557d99305a20706bfc"} Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.545140 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" event={"ID":"fe7ce149-7c15-4b79-a744-d98a58d8407d","Type":"ContainerStarted","Data":"3a014ec83c53feb556fbfa49e6cb032b29bb4efe029f4d7172ae699b9e38a995"} Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.545149 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" event={"ID":"fe7ce149-7c15-4b79-a744-d98a58d8407d","Type":"ContainerStarted","Data":"bb74b68aba6bb00c368e318749d7cc4763af50273b322d2349dbd4b19d276ce1"} Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.552858 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r"] Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.566191 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-559c944c6f-b9jgm" podStartSLOduration=2.5661695939999998 podStartE2EDuration="2.566169594s" podCreationTimestamp="2026-03-07 07:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:56:55.559844025 +0000 UTC m=+472.469010500" watchObservedRunningTime="2026-03-07 07:56:55.566169594 +0000 UTC m=+472.475336079" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.664696 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:56 crc kubenswrapper[4761]: I0307 07:56:56.526991 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 07 07:56:56 crc kubenswrapper[4761]: I0307 07:56:56.553354 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" event={"ID":"08721f50-8882-42b0-9370-cbe4508753d3","Type":"ContainerStarted","Data":"b0fb2bc55e3cb14cc1fec0dd3fced79ceab8d031e139053775a628a710b8b7b6"} Mar 07 07:56:56 crc kubenswrapper[4761]: I0307 07:56:56.556228 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3201b948-3770-482b-96c1-82c14a5fd9a4","Type":"ContainerStarted","Data":"f814d07350b0904d7e57539b61093e8d25ea5d5a8f74a5a3daf80a8741b4549f"} Mar 07 07:56:56 crc kubenswrapper[4761]: I0307 07:56:56.556266 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3201b948-3770-482b-96c1-82c14a5fd9a4","Type":"ContainerStarted","Data":"9999077c5febdc133077c8ce7a4112b35ee7345e0aa29802bbbd2660e57e2366"} Mar 07 07:56:56 crc kubenswrapper[4761]: W0307 07:56:56.633847 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59e88cc8_08cb_4709_8e8b_5a7f3bf4ba4c.slice/crio-9ab4e3d26ff47bfb87c1432c53911efa7fa8248e14333f746a5a5559d437386b WatchSource:0}: Error finding container 9ab4e3d26ff47bfb87c1432c53911efa7fa8248e14333f746a5a5559d437386b: Status 404 returned error can't find the container with id 9ab4e3d26ff47bfb87c1432c53911efa7fa8248e14333f746a5a5559d437386b Mar 07 07:56:57 crc kubenswrapper[4761]: I0307 07:56:57.566549 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" 
event={"ID":"fe7ce149-7c15-4b79-a744-d98a58d8407d","Type":"ContainerStarted","Data":"5cf652fd300a7dcad1c792bc7bca905c890450bff19ba39c0d92d05a17fe5c31"} Mar 07 07:56:57 crc kubenswrapper[4761]: I0307 07:56:57.568066 4761 generic.go:334] "Generic (PLEG): container finished" podID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerID="b0086581fe6209015a9dd84ad597cfd3c1722562dd596ec5f13e9505104599cc" exitCode=0 Mar 07 07:56:57 crc kubenswrapper[4761]: I0307 07:56:57.568154 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c","Type":"ContainerDied","Data":"b0086581fe6209015a9dd84ad597cfd3c1722562dd596ec5f13e9505104599cc"} Mar 07 07:56:57 crc kubenswrapper[4761]: I0307 07:56:57.568210 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c","Type":"ContainerStarted","Data":"9ab4e3d26ff47bfb87c1432c53911efa7fa8248e14333f746a5a5559d437386b"} Mar 07 07:56:57 crc kubenswrapper[4761]: I0307 07:56:57.577072 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3201b948-3770-482b-96c1-82c14a5fd9a4","Type":"ContainerStarted","Data":"47ce75bb7db0fdbdf0af1646173d789447d90feb4eead84921adfc6db151a7ef"} Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.583798 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" event={"ID":"08721f50-8882-42b0-9370-cbe4508753d3","Type":"ContainerStarted","Data":"86341a1a23199efc2f9a92427cbee84a0b4538947777056ba71ed97e124a44f6"} Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.584096 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.588642 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3201b948-3770-482b-96c1-82c14a5fd9a4","Type":"ContainerStarted","Data":"1f071c75c2a058ab5714cf8f6ec69894fb11b6151745d6bb9b03eb9874d45e2f"} Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.588683 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3201b948-3770-482b-96c1-82c14a5fd9a4","Type":"ContainerStarted","Data":"2d9b79c17ebac3095d2a535839e917db6f014743f7494590367a258658f92ab4"} Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.588693 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3201b948-3770-482b-96c1-82c14a5fd9a4","Type":"ContainerStarted","Data":"3aa655c4f2e049d042a9f582d4e3df0ac021e48f7252978bc0d94ee27f28d1bb"} Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.591122 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.592530 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" event={"ID":"fe7ce149-7c15-4b79-a744-d98a58d8407d","Type":"ContainerStarted","Data":"282829e32478ef6b853b733eaafa5ad027f9de850ca15e8ee3683d4b6ea69d2b"} Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.592581 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" event={"ID":"fe7ce149-7c15-4b79-a744-d98a58d8407d","Type":"ContainerStarted","Data":"31bbc9793afb0f780de9bd21971021f90bd68fd20954edd322954f244e483984"} Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.592708 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.594134 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" event={"ID":"4d4f9001-7d67-467b-8028-ec6162564829","Type":"ContainerStarted","Data":"edc3b91ba9c93fdc8b8f4ab8405a9cde976a43eb0938c38a97b875a93e760b4c"} Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.606170 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" podStartSLOduration=3.121778636 podStartE2EDuration="4.606151553s" podCreationTimestamp="2026-03-07 07:56:54 +0000 UTC" firstStartedPulling="2026-03-07 07:56:56.085661644 +0000 UTC m=+472.994828119" lastFinishedPulling="2026-03-07 07:56:57.570034561 +0000 UTC m=+474.479201036" observedRunningTime="2026-03-07 07:56:58.603843052 +0000 UTC m=+475.513009557" watchObservedRunningTime="2026-03-07 07:56:58.606151553 +0000 UTC m=+475.515318038" Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.630563 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.747530549 podStartE2EDuration="8.630546265s" podCreationTimestamp="2026-03-07 07:56:50 +0000 UTC" firstStartedPulling="2026-03-07 07:56:51.274769706 +0000 UTC m=+468.183936181" lastFinishedPulling="2026-03-07 07:56:56.157785422 +0000 UTC m=+473.066951897" observedRunningTime="2026-03-07 07:56:58.628513061 +0000 UTC m=+475.537679586" watchObservedRunningTime="2026-03-07 07:56:58.630546265 +0000 UTC m=+475.539712740" Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.689954 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" podStartSLOduration=2.334081455 podStartE2EDuration="4.689935583s" podCreationTimestamp="2026-03-07 07:56:54 +0000 UTC" firstStartedPulling="2026-03-07 07:56:55.183107712 +0000 UTC m=+472.092274187" lastFinishedPulling="2026-03-07 07:56:57.53896184 +0000 UTC m=+474.448128315" observedRunningTime="2026-03-07 07:56:58.684911079 
+0000 UTC m=+475.594077564" watchObservedRunningTime="2026-03-07 07:56:58.689935583 +0000 UTC m=+475.599102058" Mar 07 07:56:59 crc kubenswrapper[4761]: I0307 07:56:59.611425 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:59 crc kubenswrapper[4761]: I0307 07:56:59.650031 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" podStartSLOduration=3.878360545 podStartE2EDuration="8.650012123s" podCreationTimestamp="2026-03-07 07:56:51 +0000 UTC" firstStartedPulling="2026-03-07 07:56:51.904175235 +0000 UTC m=+468.813341740" lastFinishedPulling="2026-03-07 07:56:56.675826833 +0000 UTC m=+473.584993318" observedRunningTime="2026-03-07 07:56:58.711390717 +0000 UTC m=+475.620557212" watchObservedRunningTime="2026-03-07 07:56:59.650012123 +0000 UTC m=+476.559178598" Mar 07 07:57:01 crc kubenswrapper[4761]: I0307 07:57:01.626993 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c","Type":"ContainerStarted","Data":"40875af43085a90ba5369e3f4d70d09e4eaa1ab66a58e3e3dfbd155fc94928d5"} Mar 07 07:57:01 crc kubenswrapper[4761]: I0307 07:57:01.627406 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c","Type":"ContainerStarted","Data":"25f5b6d1d6f5b674e3e1d1cb0c8d8e9d6f17d9b5e3228c5745930b349371f409"} Mar 07 07:57:01 crc kubenswrapper[4761]: I0307 07:57:01.627428 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c","Type":"ContainerStarted","Data":"b61d156b0b8979e8f0e70668e3d0cc69bd89cc0026f76ee9549555e442ad5654"} Mar 07 07:57:01 crc kubenswrapper[4761]: I0307 07:57:01.627447 4761 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c","Type":"ContainerStarted","Data":"d0e52fb69fe2348a6f9bb7bc1819a8d59fdd268eff6424c841a4dedce8e5f09e"} Mar 07 07:57:01 crc kubenswrapper[4761]: I0307 07:57:01.627463 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c","Type":"ContainerStarted","Data":"67396b258a241fd21319028f01f08db5072692e9e15aff0da62e450dd25977e7"} Mar 07 07:57:01 crc kubenswrapper[4761]: I0307 07:57:01.627479 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c","Type":"ContainerStarted","Data":"a0a7de6d2e4d8b8b5ef36b0c91001f9b2740fc5723ca24636618de5ffe04b1da"} Mar 07 07:57:01 crc kubenswrapper[4761]: I0307 07:57:01.667261 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.671264393 podStartE2EDuration="6.667237606s" podCreationTimestamp="2026-03-07 07:56:55 +0000 UTC" firstStartedPulling="2026-03-07 07:56:57.570578935 +0000 UTC m=+474.479745410" lastFinishedPulling="2026-03-07 07:57:00.566552148 +0000 UTC m=+477.475718623" observedRunningTime="2026-03-07 07:57:01.666082245 +0000 UTC m=+478.575248790" watchObservedRunningTime="2026-03-07 07:57:01.667237606 +0000 UTC m=+478.576404121" Mar 07 07:57:04 crc kubenswrapper[4761]: I0307 07:57:04.152205 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:57:04 crc kubenswrapper[4761]: I0307 07:57:04.152518 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:57:04 crc kubenswrapper[4761]: I0307 07:57:04.161944 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:57:04 crc kubenswrapper[4761]: I0307 07:57:04.659878 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:57:04 crc kubenswrapper[4761]: I0307 07:57:04.742124 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fsrlc"] Mar 07 07:57:05 crc kubenswrapper[4761]: I0307 07:57:05.665085 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:57:13 crc kubenswrapper[4761]: I0307 07:57:13.768400 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:57:13 crc kubenswrapper[4761]: I0307 07:57:13.769087 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:57:14 crc kubenswrapper[4761]: I0307 07:57:14.699026 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:57:14 crc kubenswrapper[4761]: I0307 07:57:14.699118 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:57:29 crc kubenswrapper[4761]: I0307 07:57:29.808879 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-fsrlc" podUID="7b1e7bf9-5dc9-4326-b63d-426a716351bc" containerName="console" 
containerID="cri-o://2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735" gracePeriod=15 Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.235634 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fsrlc_7b1e7bf9-5dc9-4326-b63d-426a716351bc/console/0.log" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.235884 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.334393 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-service-ca\") pod \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.334458 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-oauth-serving-cert\") pod \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.334505 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-serving-cert\") pod \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.334531 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-oauth-config\") pod \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 
07:57:30.334578 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-trusted-ca-bundle\") pod \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.334604 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdfpv\" (UniqueName: \"kubernetes.io/projected/7b1e7bf9-5dc9-4326-b63d-426a716351bc-kube-api-access-wdfpv\") pod \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.334637 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-config\") pod \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.335287 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-config" (OuterVolumeSpecName: "console-config") pod "7b1e7bf9-5dc9-4326-b63d-426a716351bc" (UID: "7b1e7bf9-5dc9-4326-b63d-426a716351bc"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.335305 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7b1e7bf9-5dc9-4326-b63d-426a716351bc" (UID: "7b1e7bf9-5dc9-4326-b63d-426a716351bc"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.335281 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7b1e7bf9-5dc9-4326-b63d-426a716351bc" (UID: "7b1e7bf9-5dc9-4326-b63d-426a716351bc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.335356 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-service-ca" (OuterVolumeSpecName: "service-ca") pod "7b1e7bf9-5dc9-4326-b63d-426a716351bc" (UID: "7b1e7bf9-5dc9-4326-b63d-426a716351bc"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.339731 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7b1e7bf9-5dc9-4326-b63d-426a716351bc" (UID: "7b1e7bf9-5dc9-4326-b63d-426a716351bc"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.340112 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7b1e7bf9-5dc9-4326-b63d-426a716351bc" (UID: "7b1e7bf9-5dc9-4326-b63d-426a716351bc"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.341032 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b1e7bf9-5dc9-4326-b63d-426a716351bc-kube-api-access-wdfpv" (OuterVolumeSpecName: "kube-api-access-wdfpv") pod "7b1e7bf9-5dc9-4326-b63d-426a716351bc" (UID: "7b1e7bf9-5dc9-4326-b63d-426a716351bc"). InnerVolumeSpecName "kube-api-access-wdfpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.435541 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.435570 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdfpv\" (UniqueName: \"kubernetes.io/projected/7b1e7bf9-5dc9-4326-b63d-426a716351bc-kube-api-access-wdfpv\") on node \"crc\" DevicePath \"\"" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.435582 4761 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.435594 4761 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-service-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.435603 4761 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.435611 4761 reconciler_common.go:293] "Volume detached for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.435619 4761 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.862058 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fsrlc_7b1e7bf9-5dc9-4326-b63d-426a716351bc/console/0.log" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.862134 4761 generic.go:334] "Generic (PLEG): container finished" podID="7b1e7bf9-5dc9-4326-b63d-426a716351bc" containerID="2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735" exitCode=2 Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.862177 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fsrlc" event={"ID":"7b1e7bf9-5dc9-4326-b63d-426a716351bc","Type":"ContainerDied","Data":"2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735"} Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.862226 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fsrlc" event={"ID":"7b1e7bf9-5dc9-4326-b63d-426a716351bc","Type":"ContainerDied","Data":"5d8c56f6ff97a80ea16e87c27e25c3984cdb01c579b7c368c7a0e106d6b80361"} Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.862254 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.862261 4761 scope.go:117] "RemoveContainer" containerID="2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.895869 4761 scope.go:117] "RemoveContainer" containerID="2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735" Mar 07 07:57:30 crc kubenswrapper[4761]: E0307 07:57:30.896826 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735\": container with ID starting with 2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735 not found: ID does not exist" containerID="2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.896882 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735"} err="failed to get container status \"2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735\": rpc error: code = NotFound desc = could not find container \"2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735\": container with ID starting with 2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735 not found: ID does not exist" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.920515 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fsrlc"] Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.928919 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-fsrlc"] Mar 07 07:57:31 crc kubenswrapper[4761]: I0307 07:57:31.717400 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b1e7bf9-5dc9-4326-b63d-426a716351bc" 
path="/var/lib/kubelet/pods/7b1e7bf9-5dc9-4326-b63d-426a716351bc/volumes" Mar 07 07:57:34 crc kubenswrapper[4761]: I0307 07:57:34.705414 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:57:34 crc kubenswrapper[4761]: I0307 07:57:34.714017 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:57:43 crc kubenswrapper[4761]: I0307 07:57:43.768100 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:57:43 crc kubenswrapper[4761]: I0307 07:57:43.768702 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:57:43 crc kubenswrapper[4761]: I0307 07:57:43.768786 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:57:43 crc kubenswrapper[4761]: I0307 07:57:43.769765 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99999bd284e69fd9faa6103a00d03a466d499b9bac79905f9b3132ce0f479790"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 07:57:43 crc kubenswrapper[4761]: I0307 07:57:43.769894 4761 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://99999bd284e69fd9faa6103a00d03a466d499b9bac79905f9b3132ce0f479790" gracePeriod=600 Mar 07 07:57:43 crc kubenswrapper[4761]: I0307 07:57:43.954213 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="99999bd284e69fd9faa6103a00d03a466d499b9bac79905f9b3132ce0f479790" exitCode=0 Mar 07 07:57:43 crc kubenswrapper[4761]: I0307 07:57:43.954317 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"99999bd284e69fd9faa6103a00d03a466d499b9bac79905f9b3132ce0f479790"} Mar 07 07:57:43 crc kubenswrapper[4761]: I0307 07:57:43.954587 4761 scope.go:117] "RemoveContainer" containerID="32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897" Mar 07 07:57:44 crc kubenswrapper[4761]: I0307 07:57:44.966964 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"4e56717fa60308e8f622ec33776708c4b00d9ccd7a8ad0a18a994be6b41d32a1"} Mar 07 07:57:55 crc kubenswrapper[4761]: I0307 07:57:55.665752 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:57:55 crc kubenswrapper[4761]: I0307 07:57:55.696569 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:57:56 crc kubenswrapper[4761]: I0307 07:57:56.263795 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.130696 4761 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547838-mpzrk"] Mar 07 07:58:00 crc kubenswrapper[4761]: E0307 07:58:00.131283 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b1e7bf9-5dc9-4326-b63d-426a716351bc" containerName="console" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.131298 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b1e7bf9-5dc9-4326-b63d-426a716351bc" containerName="console" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.132564 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b1e7bf9-5dc9-4326-b63d-426a716351bc" containerName="console" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.133138 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547838-mpzrk" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.135103 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.135275 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.135414 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.145611 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547838-mpzrk"] Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.280511 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mv85\" (UniqueName: \"kubernetes.io/projected/874f3622-b314-4b99-b663-e7b63dad53f6-kube-api-access-9mv85\") pod \"auto-csr-approver-29547838-mpzrk\" (UID: \"874f3622-b314-4b99-b663-e7b63dad53f6\") " pod="openshift-infra/auto-csr-approver-29547838-mpzrk" Mar 
07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.382315 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mv85\" (UniqueName: \"kubernetes.io/projected/874f3622-b314-4b99-b663-e7b63dad53f6-kube-api-access-9mv85\") pod \"auto-csr-approver-29547838-mpzrk\" (UID: \"874f3622-b314-4b99-b663-e7b63dad53f6\") " pod="openshift-infra/auto-csr-approver-29547838-mpzrk" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.405313 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mv85\" (UniqueName: \"kubernetes.io/projected/874f3622-b314-4b99-b663-e7b63dad53f6-kube-api-access-9mv85\") pod \"auto-csr-approver-29547838-mpzrk\" (UID: \"874f3622-b314-4b99-b663-e7b63dad53f6\") " pod="openshift-infra/auto-csr-approver-29547838-mpzrk" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.457700 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547838-mpzrk" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.893503 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547838-mpzrk"] Mar 07 07:58:01 crc kubenswrapper[4761]: I0307 07:58:01.261417 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547838-mpzrk" event={"ID":"874f3622-b314-4b99-b663-e7b63dad53f6","Type":"ContainerStarted","Data":"7c9c85c2da1ea51772b586c5d5f2a90623ac04dac6d870b06c0b144e40c119ea"} Mar 07 07:58:02 crc kubenswrapper[4761]: I0307 07:58:02.267837 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547838-mpzrk" event={"ID":"874f3622-b314-4b99-b663-e7b63dad53f6","Type":"ContainerStarted","Data":"7a2a5869acc50549f2b35140d3c5e4a51520531a922a419e98ea8062338830e2"} Mar 07 07:58:02 crc kubenswrapper[4761]: I0307 07:58:02.280743 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29547838-mpzrk" podStartSLOduration=1.272913252 podStartE2EDuration="2.280703423s" podCreationTimestamp="2026-03-07 07:58:00 +0000 UTC" firstStartedPulling="2026-03-07 07:58:00.900319762 +0000 UTC m=+537.809486277" lastFinishedPulling="2026-03-07 07:58:01.908109943 +0000 UTC m=+538.817276448" observedRunningTime="2026-03-07 07:58:02.278708769 +0000 UTC m=+539.187875274" watchObservedRunningTime="2026-03-07 07:58:02.280703423 +0000 UTC m=+539.189869898" Mar 07 07:58:03 crc kubenswrapper[4761]: I0307 07:58:03.276413 4761 generic.go:334] "Generic (PLEG): container finished" podID="874f3622-b314-4b99-b663-e7b63dad53f6" containerID="7a2a5869acc50549f2b35140d3c5e4a51520531a922a419e98ea8062338830e2" exitCode=0 Mar 07 07:58:03 crc kubenswrapper[4761]: I0307 07:58:03.276531 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547838-mpzrk" event={"ID":"874f3622-b314-4b99-b663-e7b63dad53f6","Type":"ContainerDied","Data":"7a2a5869acc50549f2b35140d3c5e4a51520531a922a419e98ea8062338830e2"} Mar 07 07:58:04 crc kubenswrapper[4761]: I0307 07:58:04.576819 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547838-mpzrk" Mar 07 07:58:04 crc kubenswrapper[4761]: I0307 07:58:04.649145 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mv85\" (UniqueName: \"kubernetes.io/projected/874f3622-b314-4b99-b663-e7b63dad53f6-kube-api-access-9mv85\") pod \"874f3622-b314-4b99-b663-e7b63dad53f6\" (UID: \"874f3622-b314-4b99-b663-e7b63dad53f6\") " Mar 07 07:58:04 crc kubenswrapper[4761]: I0307 07:58:04.658423 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/874f3622-b314-4b99-b663-e7b63dad53f6-kube-api-access-9mv85" (OuterVolumeSpecName: "kube-api-access-9mv85") pod "874f3622-b314-4b99-b663-e7b63dad53f6" (UID: "874f3622-b314-4b99-b663-e7b63dad53f6"). InnerVolumeSpecName "kube-api-access-9mv85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:58:04 crc kubenswrapper[4761]: I0307 07:58:04.750933 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mv85\" (UniqueName: \"kubernetes.io/projected/874f3622-b314-4b99-b663-e7b63dad53f6-kube-api-access-9mv85\") on node \"crc\" DevicePath \"\"" Mar 07 07:58:05 crc kubenswrapper[4761]: I0307 07:58:05.295928 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547838-mpzrk" event={"ID":"874f3622-b314-4b99-b663-e7b63dad53f6","Type":"ContainerDied","Data":"7c9c85c2da1ea51772b586c5d5f2a90623ac04dac6d870b06c0b144e40c119ea"} Mar 07 07:58:05 crc kubenswrapper[4761]: I0307 07:58:05.296252 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c9c85c2da1ea51772b586c5d5f2a90623ac04dac6d870b06c0b144e40c119ea" Mar 07 07:58:05 crc kubenswrapper[4761]: I0307 07:58:05.295980 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547838-mpzrk" Mar 07 07:58:05 crc kubenswrapper[4761]: I0307 07:58:05.333618 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547832-2fpg8"] Mar 07 07:58:05 crc kubenswrapper[4761]: I0307 07:58:05.340685 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547832-2fpg8"] Mar 07 07:58:05 crc kubenswrapper[4761]: I0307 07:58:05.726992 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="083b3718-3e45-40ca-8adf-5f417eeda74d" path="/var/lib/kubelet/pods/083b3718-3e45-40ca-8adf-5f417eeda74d/volumes" Mar 07 07:58:41 crc kubenswrapper[4761]: I0307 07:58:41.362023 4761 scope.go:117] "RemoveContainer" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" Mar 07 07:59:04 crc kubenswrapper[4761]: I0307 07:59:04.992473 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-57ff97798b-fglrq"] Mar 07 07:59:04 crc kubenswrapper[4761]: E0307 07:59:04.993375 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874f3622-b314-4b99-b663-e7b63dad53f6" containerName="oc" Mar 07 07:59:04 crc kubenswrapper[4761]: I0307 07:59:04.993390 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="874f3622-b314-4b99-b663-e7b63dad53f6" containerName="oc" Mar 07 07:59:04 crc kubenswrapper[4761]: I0307 07:59:04.993540 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="874f3622-b314-4b99-b663-e7b63dad53f6" containerName="oc" Mar 07 07:59:04 crc kubenswrapper[4761]: I0307 07:59:04.994028 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.021991 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57ff97798b-fglrq"] Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.112106 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-console-config\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.112170 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwkvp\" (UniqueName: \"kubernetes.io/projected/0c90daf5-8fd7-4370-81d3-593760b7886f-kube-api-access-rwkvp\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.112221 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-oauth-serving-cert\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.112281 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-oauth-config\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.112330 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-trusted-ca-bundle\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.112365 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-serving-cert\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.112389 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-service-ca\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.213100 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-oauth-config\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.213147 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-trusted-ca-bundle\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.213167 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-serving-cert\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.213185 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-service-ca\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.213222 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-console-config\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.213265 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwkvp\" (UniqueName: \"kubernetes.io/projected/0c90daf5-8fd7-4370-81d3-593760b7886f-kube-api-access-rwkvp\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.213302 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-oauth-serving-cert\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.214426 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-oauth-serving-cert\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.215018 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-trusted-ca-bundle\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.215101 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-service-ca\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.215455 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-console-config\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.220593 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-oauth-config\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.223401 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-serving-cert\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.231038 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwkvp\" (UniqueName: \"kubernetes.io/projected/0c90daf5-8fd7-4370-81d3-593760b7886f-kube-api-access-rwkvp\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.316826 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.566430 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57ff97798b-fglrq"] Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.755236 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57ff97798b-fglrq" event={"ID":"0c90daf5-8fd7-4370-81d3-593760b7886f","Type":"ContainerStarted","Data":"2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0"} Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.755671 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57ff97798b-fglrq" event={"ID":"0c90daf5-8fd7-4370-81d3-593760b7886f","Type":"ContainerStarted","Data":"efa8419c67761e6f44973550c8c4891d02eea844e1a31bf44687eb18787132b7"} Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.773950 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-57ff97798b-fglrq" podStartSLOduration=1.7738870979999999 podStartE2EDuration="1.773887098s" podCreationTimestamp="2026-03-07 07:59:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:59:05.770152321 +0000 UTC m=+602.679318956" watchObservedRunningTime="2026-03-07 07:59:05.773887098 +0000 UTC m=+602.683053573" Mar 07 07:59:15 crc kubenswrapper[4761]: I0307 07:59:15.317795 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:15 crc kubenswrapper[4761]: I0307 07:59:15.319046 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:15 crc kubenswrapper[4761]: I0307 07:59:15.326901 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:15 crc kubenswrapper[4761]: I0307 07:59:15.835606 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:15 crc kubenswrapper[4761]: I0307 07:59:15.901965 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-559c944c6f-b9jgm"] Mar 07 07:59:40 crc kubenswrapper[4761]: I0307 07:59:40.952133 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-559c944c6f-b9jgm" podUID="b38a2995-784e-4f3b-8a16-0523c6608976" containerName="console" containerID="cri-o://aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119" gracePeriod=15 Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.410782 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-559c944c6f-b9jgm_b38a2995-784e-4f3b-8a16-0523c6608976/console/0.log" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.410858 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.417800 4761 scope.go:117] "RemoveContainer" containerID="654d770009a0f10f99664fd8e046dfa38b717254a33124a41073359820cb504e" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.497145 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-oauth-serving-cert\") pod \"b38a2995-784e-4f3b-8a16-0523c6608976\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.497590 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-serving-cert\") pod \"b38a2995-784e-4f3b-8a16-0523c6608976\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.497629 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-trusted-ca-bundle\") pod \"b38a2995-784e-4f3b-8a16-0523c6608976\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.497667 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-service-ca\") pod \"b38a2995-784e-4f3b-8a16-0523c6608976\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.497751 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-console-config\") pod \"b38a2995-784e-4f3b-8a16-0523c6608976\" (UID: 
\"b38a2995-784e-4f3b-8a16-0523c6608976\") " Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.497880 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-oauth-config\") pod \"b38a2995-784e-4f3b-8a16-0523c6608976\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.497948 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qllg6\" (UniqueName: \"kubernetes.io/projected/b38a2995-784e-4f3b-8a16-0523c6608976-kube-api-access-qllg6\") pod \"b38a2995-784e-4f3b-8a16-0523c6608976\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.498969 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b38a2995-784e-4f3b-8a16-0523c6608976" (UID: "b38a2995-784e-4f3b-8a16-0523c6608976"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.499809 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b38a2995-784e-4f3b-8a16-0523c6608976" (UID: "b38a2995-784e-4f3b-8a16-0523c6608976"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.500397 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-console-config" (OuterVolumeSpecName: "console-config") pod "b38a2995-784e-4f3b-8a16-0523c6608976" (UID: "b38a2995-784e-4f3b-8a16-0523c6608976"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.501233 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-service-ca" (OuterVolumeSpecName: "service-ca") pod "b38a2995-784e-4f3b-8a16-0523c6608976" (UID: "b38a2995-784e-4f3b-8a16-0523c6608976"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.505139 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b38a2995-784e-4f3b-8a16-0523c6608976" (UID: "b38a2995-784e-4f3b-8a16-0523c6608976"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.505690 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b38a2995-784e-4f3b-8a16-0523c6608976" (UID: "b38a2995-784e-4f3b-8a16-0523c6608976"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.506037 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b38a2995-784e-4f3b-8a16-0523c6608976-kube-api-access-qllg6" (OuterVolumeSpecName: "kube-api-access-qllg6") pod "b38a2995-784e-4f3b-8a16-0523c6608976" (UID: "b38a2995-784e-4f3b-8a16-0523c6608976"). InnerVolumeSpecName "kube-api-access-qllg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.599947 4761 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.600004 4761 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.600027 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.600047 4761 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-service-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.600067 4761 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-console-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.600084 4761 reconciler_common.go:293] "Volume detached for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.600102 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qllg6\" (UniqueName: \"kubernetes.io/projected/b38a2995-784e-4f3b-8a16-0523c6608976-kube-api-access-qllg6\") on node \"crc\" DevicePath \"\"" Mar 07 07:59:42 crc kubenswrapper[4761]: I0307 07:59:42.047785 4761 generic.go:334] "Generic (PLEG): container finished" podID="b38a2995-784e-4f3b-8a16-0523c6608976" containerID="aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119" exitCode=2 Mar 07 07:59:42 crc kubenswrapper[4761]: I0307 07:59:42.047857 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-559c944c6f-b9jgm" event={"ID":"b38a2995-784e-4f3b-8a16-0523c6608976","Type":"ContainerDied","Data":"aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119"} Mar 07 07:59:42 crc kubenswrapper[4761]: I0307 07:59:42.047908 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-559c944c6f-b9jgm" event={"ID":"b38a2995-784e-4f3b-8a16-0523c6608976","Type":"ContainerDied","Data":"2b2c2fbdbea4d9ece5b1cac1e7b8c486e2c9f72129148569cb7377d6f110d9f7"} Mar 07 07:59:42 crc kubenswrapper[4761]: I0307 07:59:42.047977 4761 scope.go:117] "RemoveContainer" containerID="aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119" Mar 07 07:59:42 crc kubenswrapper[4761]: I0307 07:59:42.048968 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:59:42 crc kubenswrapper[4761]: I0307 07:59:42.081524 4761 scope.go:117] "RemoveContainer" containerID="aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119" Mar 07 07:59:42 crc kubenswrapper[4761]: E0307 07:59:42.082109 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119\": container with ID starting with aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119 not found: ID does not exist" containerID="aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119" Mar 07 07:59:42 crc kubenswrapper[4761]: I0307 07:59:42.082164 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119"} err="failed to get container status \"aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119\": rpc error: code = NotFound desc = could not find container \"aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119\": container with ID starting with aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119 not found: ID does not exist" Mar 07 07:59:42 crc kubenswrapper[4761]: I0307 07:59:42.096081 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-559c944c6f-b9jgm"] Mar 07 07:59:42 crc kubenswrapper[4761]: I0307 07:59:42.103844 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-559c944c6f-b9jgm"] Mar 07 07:59:43 crc kubenswrapper[4761]: I0307 07:59:43.718297 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b38a2995-784e-4f3b-8a16-0523c6608976" path="/var/lib/kubelet/pods/b38a2995-784e-4f3b-8a16-0523c6608976/volumes" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.151916 4761 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv"] Mar 07 08:00:00 crc kubenswrapper[4761]: E0307 08:00:00.153261 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38a2995-784e-4f3b-8a16-0523c6608976" containerName="console" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.153297 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38a2995-784e-4f3b-8a16-0523c6608976" containerName="console" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.153549 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b38a2995-784e-4f3b-8a16-0523c6608976" containerName="console" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.154463 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.156513 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.157260 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.158330 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547840-c7fc5"] Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.159492 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547840-c7fc5" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.161510 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.165093 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.165442 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.165974 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv"] Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.172760 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547840-c7fc5"] Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.228865 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4ef27e8-2f95-4794-a265-433ecf982772-config-volume\") pod \"collect-profiles-29547840-ddctv\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.228939 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpsmp\" (UniqueName: \"kubernetes.io/projected/b4ef27e8-2f95-4794-a265-433ecf982772-kube-api-access-mpsmp\") pod \"collect-profiles-29547840-ddctv\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.228966 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4ef27e8-2f95-4794-a265-433ecf982772-secret-volume\") pod \"collect-profiles-29547840-ddctv\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.229182 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx9xv\" (UniqueName: \"kubernetes.io/projected/438f4d3e-a816-40a9-9518-588b04476491-kube-api-access-dx9xv\") pod \"auto-csr-approver-29547840-c7fc5\" (UID: \"438f4d3e-a816-40a9-9518-588b04476491\") " pod="openshift-infra/auto-csr-approver-29547840-c7fc5" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.330594 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4ef27e8-2f95-4794-a265-433ecf982772-config-volume\") pod \"collect-profiles-29547840-ddctv\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.330657 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpsmp\" (UniqueName: \"kubernetes.io/projected/b4ef27e8-2f95-4794-a265-433ecf982772-kube-api-access-mpsmp\") pod \"collect-profiles-29547840-ddctv\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.330688 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4ef27e8-2f95-4794-a265-433ecf982772-secret-volume\") pod \"collect-profiles-29547840-ddctv\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.330755 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx9xv\" (UniqueName: \"kubernetes.io/projected/438f4d3e-a816-40a9-9518-588b04476491-kube-api-access-dx9xv\") pod \"auto-csr-approver-29547840-c7fc5\" (UID: \"438f4d3e-a816-40a9-9518-588b04476491\") " pod="openshift-infra/auto-csr-approver-29547840-c7fc5" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.332614 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4ef27e8-2f95-4794-a265-433ecf982772-config-volume\") pod \"collect-profiles-29547840-ddctv\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.339371 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4ef27e8-2f95-4794-a265-433ecf982772-secret-volume\") pod \"collect-profiles-29547840-ddctv\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.349918 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx9xv\" (UniqueName: \"kubernetes.io/projected/438f4d3e-a816-40a9-9518-588b04476491-kube-api-access-dx9xv\") pod \"auto-csr-approver-29547840-c7fc5\" (UID: \"438f4d3e-a816-40a9-9518-588b04476491\") " pod="openshift-infra/auto-csr-approver-29547840-c7fc5" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.363433 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpsmp\" (UniqueName: \"kubernetes.io/projected/b4ef27e8-2f95-4794-a265-433ecf982772-kube-api-access-mpsmp\") pod 
\"collect-profiles-29547840-ddctv\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.481534 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.498275 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547840-c7fc5" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.694807 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547840-c7fc5"] Mar 07 08:00:00 crc kubenswrapper[4761]: W0307 08:00:00.700896 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod438f4d3e_a816_40a9_9518_588b04476491.slice/crio-3b344aba4f1a4280bcbcdee75fc6c6ca5294fdd9b74f81f8e5119c33bd069b91 WatchSource:0}: Error finding container 3b344aba4f1a4280bcbcdee75fc6c6ca5294fdd9b74f81f8e5119c33bd069b91: Status 404 returned error can't find the container with id 3b344aba4f1a4280bcbcdee75fc6c6ca5294fdd9b74f81f8e5119c33bd069b91 Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.735905 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv"] Mar 07 08:00:00 crc kubenswrapper[4761]: W0307 08:00:00.741146 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4ef27e8_2f95_4794_a265_433ecf982772.slice/crio-fb7c5de3a5d1a59b95e780b9d5fdefd7933d845bca39e4c51a07b04f43099022 WatchSource:0}: Error finding container fb7c5de3a5d1a59b95e780b9d5fdefd7933d845bca39e4c51a07b04f43099022: Status 404 returned error can't find the container with id 
fb7c5de3a5d1a59b95e780b9d5fdefd7933d845bca39e4c51a07b04f43099022 Mar 07 08:00:01 crc kubenswrapper[4761]: I0307 08:00:01.192591 4761 generic.go:334] "Generic (PLEG): container finished" podID="b4ef27e8-2f95-4794-a265-433ecf982772" containerID="b3cf6b989ce07e65ba7db0ae4f80ce2dbf0060700b3790a4425415dd17be1577" exitCode=0 Mar 07 08:00:01 crc kubenswrapper[4761]: I0307 08:00:01.192660 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" event={"ID":"b4ef27e8-2f95-4794-a265-433ecf982772","Type":"ContainerDied","Data":"b3cf6b989ce07e65ba7db0ae4f80ce2dbf0060700b3790a4425415dd17be1577"} Mar 07 08:00:01 crc kubenswrapper[4761]: I0307 08:00:01.192690 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" event={"ID":"b4ef27e8-2f95-4794-a265-433ecf982772","Type":"ContainerStarted","Data":"fb7c5de3a5d1a59b95e780b9d5fdefd7933d845bca39e4c51a07b04f43099022"} Mar 07 08:00:01 crc kubenswrapper[4761]: I0307 08:00:01.195193 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547840-c7fc5" event={"ID":"438f4d3e-a816-40a9-9518-588b04476491","Type":"ContainerStarted","Data":"3b344aba4f1a4280bcbcdee75fc6c6ca5294fdd9b74f81f8e5119c33bd069b91"} Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.454737 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.561962 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4ef27e8-2f95-4794-a265-433ecf982772-secret-volume\") pod \"b4ef27e8-2f95-4794-a265-433ecf982772\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.562239 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpsmp\" (UniqueName: \"kubernetes.io/projected/b4ef27e8-2f95-4794-a265-433ecf982772-kube-api-access-mpsmp\") pod \"b4ef27e8-2f95-4794-a265-433ecf982772\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.562326 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4ef27e8-2f95-4794-a265-433ecf982772-config-volume\") pod \"b4ef27e8-2f95-4794-a265-433ecf982772\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.562992 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4ef27e8-2f95-4794-a265-433ecf982772-config-volume" (OuterVolumeSpecName: "config-volume") pod "b4ef27e8-2f95-4794-a265-433ecf982772" (UID: "b4ef27e8-2f95-4794-a265-433ecf982772"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.568002 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4ef27e8-2f95-4794-a265-433ecf982772-kube-api-access-mpsmp" (OuterVolumeSpecName: "kube-api-access-mpsmp") pod "b4ef27e8-2f95-4794-a265-433ecf982772" (UID: "b4ef27e8-2f95-4794-a265-433ecf982772"). 
InnerVolumeSpecName "kube-api-access-mpsmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.568588 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4ef27e8-2f95-4794-a265-433ecf982772-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b4ef27e8-2f95-4794-a265-433ecf982772" (UID: "b4ef27e8-2f95-4794-a265-433ecf982772"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.663366 4761 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4ef27e8-2f95-4794-a265-433ecf982772-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.663434 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpsmp\" (UniqueName: \"kubernetes.io/projected/b4ef27e8-2f95-4794-a265-433ecf982772-kube-api-access-mpsmp\") on node \"crc\" DevicePath \"\"" Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.663447 4761 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4ef27e8-2f95-4794-a265-433ecf982772-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 08:00:03 crc kubenswrapper[4761]: I0307 08:00:03.215225 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" event={"ID":"b4ef27e8-2f95-4794-a265-433ecf982772","Type":"ContainerDied","Data":"fb7c5de3a5d1a59b95e780b9d5fdefd7933d845bca39e4c51a07b04f43099022"} Mar 07 08:00:03 crc kubenswrapper[4761]: I0307 08:00:03.215304 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb7c5de3a5d1a59b95e780b9d5fdefd7933d845bca39e4c51a07b04f43099022" Mar 07 08:00:03 crc kubenswrapper[4761]: I0307 08:00:03.215327 4761 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:13 crc kubenswrapper[4761]: I0307 08:00:13.768107 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:00:13 crc kubenswrapper[4761]: I0307 08:00:13.768770 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:00:19 crc kubenswrapper[4761]: I0307 08:00:19.345224 4761 generic.go:334] "Generic (PLEG): container finished" podID="438f4d3e-a816-40a9-9518-588b04476491" containerID="5963452c1289655e1fa326e8a7200c203507ffba57d60c3182b659ac7a387bdb" exitCode=0 Mar 07 08:00:19 crc kubenswrapper[4761]: I0307 08:00:19.345314 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547840-c7fc5" event={"ID":"438f4d3e-a816-40a9-9518-588b04476491","Type":"ContainerDied","Data":"5963452c1289655e1fa326e8a7200c203507ffba57d60c3182b659ac7a387bdb"} Mar 07 08:00:20 crc kubenswrapper[4761]: I0307 08:00:20.719621 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547840-c7fc5" Mar 07 08:00:20 crc kubenswrapper[4761]: I0307 08:00:20.864863 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx9xv\" (UniqueName: \"kubernetes.io/projected/438f4d3e-a816-40a9-9518-588b04476491-kube-api-access-dx9xv\") pod \"438f4d3e-a816-40a9-9518-588b04476491\" (UID: \"438f4d3e-a816-40a9-9518-588b04476491\") " Mar 07 08:00:20 crc kubenswrapper[4761]: I0307 08:00:20.882798 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/438f4d3e-a816-40a9-9518-588b04476491-kube-api-access-dx9xv" (OuterVolumeSpecName: "kube-api-access-dx9xv") pod "438f4d3e-a816-40a9-9518-588b04476491" (UID: "438f4d3e-a816-40a9-9518-588b04476491"). InnerVolumeSpecName "kube-api-access-dx9xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:00:20 crc kubenswrapper[4761]: I0307 08:00:20.966558 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx9xv\" (UniqueName: \"kubernetes.io/projected/438f4d3e-a816-40a9-9518-588b04476491-kube-api-access-dx9xv\") on node \"crc\" DevicePath \"\"" Mar 07 08:00:21 crc kubenswrapper[4761]: I0307 08:00:21.362336 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547840-c7fc5" event={"ID":"438f4d3e-a816-40a9-9518-588b04476491","Type":"ContainerDied","Data":"3b344aba4f1a4280bcbcdee75fc6c6ca5294fdd9b74f81f8e5119c33bd069b91"} Mar 07 08:00:21 crc kubenswrapper[4761]: I0307 08:00:21.362374 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547840-c7fc5" Mar 07 08:00:21 crc kubenswrapper[4761]: I0307 08:00:21.362385 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b344aba4f1a4280bcbcdee75fc6c6ca5294fdd9b74f81f8e5119c33bd069b91" Mar 07 08:00:21 crc kubenswrapper[4761]: I0307 08:00:21.803136 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547834-vbflv"] Mar 07 08:00:21 crc kubenswrapper[4761]: I0307 08:00:21.811465 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547834-vbflv"] Mar 07 08:00:23 crc kubenswrapper[4761]: I0307 08:00:23.718766 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44149f32-4111-4706-977e-411d6011bb02" path="/var/lib/kubelet/pods/44149f32-4111-4706-977e-411d6011bb02/volumes" Mar 07 08:00:41 crc kubenswrapper[4761]: I0307 08:00:41.495158 4761 scope.go:117] "RemoveContainer" containerID="829cbe3ab09ee538f0ac491499b1b8d9f6872046415226f166160c3c514103af" Mar 07 08:00:43 crc kubenswrapper[4761]: I0307 08:00:43.768854 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:00:43 crc kubenswrapper[4761]: I0307 08:00:43.769341 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.237163 4761 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"]
Mar 07 08:01:00 crc kubenswrapper[4761]: E0307 08:01:00.238952 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4ef27e8-2f95-4794-a265-433ecf982772" containerName="collect-profiles"
Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.239066 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4ef27e8-2f95-4794-a265-433ecf982772" containerName="collect-profiles"
Mar 07 08:01:00 crc kubenswrapper[4761]: E0307 08:01:00.239151 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438f4d3e-a816-40a9-9518-588b04476491" containerName="oc"
Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.239230 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="438f4d3e-a816-40a9-9518-588b04476491" containerName="oc"
Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.239463 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4ef27e8-2f95-4794-a265-433ecf982772" containerName="collect-profiles"
Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.239552 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="438f4d3e-a816-40a9-9518-588b04476491" containerName="oc"
Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.240593 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"
Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.243679 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.255637 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"]
Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.422833 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"
Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.422900 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnxxc\" (UniqueName: \"kubernetes.io/projected/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-kube-api-access-tnxxc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"
Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.422926 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"
Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.524681 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"
Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.524862 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnxxc\" (UniqueName: \"kubernetes.io/projected/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-kube-api-access-tnxxc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"
Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.524921 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"
Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.525629 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"
Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.525772 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"
Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.557862 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnxxc\" (UniqueName: \"kubernetes.io/projected/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-kube-api-access-tnxxc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"
Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.560667 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"
Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.791829 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"]
Mar 07 08:01:00 crc kubenswrapper[4761]: W0307 08:01:00.798192 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ae4ef8d_9fdc_48d8_ac9c_ed0f896a6de4.slice/crio-782545d9cd0a2228ce5ac344a05246a708af9f21071ec0a6b4746bca281769bf WatchSource:0}: Error finding container 782545d9cd0a2228ce5ac344a05246a708af9f21071ec0a6b4746bca281769bf: Status 404 returned error can't find the container with id 782545d9cd0a2228ce5ac344a05246a708af9f21071ec0a6b4746bca281769bf
Mar 07 08:01:01 crc kubenswrapper[4761]: I0307 08:01:01.670841 4761 generic.go:334] "Generic (PLEG): container finished" podID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerID="432b6b29560a571a583f65e7c143398bfe71cb38b1929ffc932b0449f481d796" exitCode=0
Mar 07 08:01:01 crc kubenswrapper[4761]: I0307 08:01:01.670926 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" event={"ID":"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4","Type":"ContainerDied","Data":"432b6b29560a571a583f65e7c143398bfe71cb38b1929ffc932b0449f481d796"}
Mar 07 08:01:01 crc kubenswrapper[4761]: I0307 08:01:01.671166 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" event={"ID":"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4","Type":"ContainerStarted","Data":"782545d9cd0a2228ce5ac344a05246a708af9f21071ec0a6b4746bca281769bf"}
Mar 07 08:01:03 crc kubenswrapper[4761]: I0307 08:01:03.691691 4761 generic.go:334] "Generic (PLEG): container finished" podID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerID="cdad18058978a021730080b8b3e6f037396b1a050bf11191c09530e50b971f8d" exitCode=0
Mar 07 08:01:03 crc kubenswrapper[4761]: I0307 08:01:03.691793 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" event={"ID":"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4","Type":"ContainerDied","Data":"cdad18058978a021730080b8b3e6f037396b1a050bf11191c09530e50b971f8d"}
Mar 07 08:01:04 crc kubenswrapper[4761]: I0307 08:01:04.701740 4761 generic.go:334] "Generic (PLEG): container finished" podID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerID="ebdf2af576a4e4f807669bb1016ceddcfe5f3d8495c1f5394f5e713fd1f23ba0" exitCode=0
Mar 07 08:01:04 crc kubenswrapper[4761]: I0307 08:01:04.702029 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" event={"ID":"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4","Type":"ContainerDied","Data":"ebdf2af576a4e4f807669bb1016ceddcfe5f3d8495c1f5394f5e713fd1f23ba0"}
Mar 07 08:01:05 crc kubenswrapper[4761]: I0307 08:01:05.925093 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"
Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.015366 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-bundle\") pod \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") "
Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.015511 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-util\") pod \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") "
Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.015542 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnxxc\" (UniqueName: \"kubernetes.io/projected/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-kube-api-access-tnxxc\") pod \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") "
Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.017191 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-bundle" (OuterVolumeSpecName: "bundle") pod "9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" (UID: "9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.023927 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-kube-api-access-tnxxc" (OuterVolumeSpecName: "kube-api-access-tnxxc") pod "9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" (UID: "9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4"). InnerVolumeSpecName "kube-api-access-tnxxc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.118262 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnxxc\" (UniqueName: \"kubernetes.io/projected/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-kube-api-access-tnxxc\") on node \"crc\" DevicePath \"\""
Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.118331 4761 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.371477 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-util" (OuterVolumeSpecName: "util") pod "9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" (UID: "9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.424274 4761 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-util\") on node \"crc\" DevicePath \"\""
Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.717019 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" event={"ID":"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4","Type":"ContainerDied","Data":"782545d9cd0a2228ce5ac344a05246a708af9f21071ec0a6b4746bca281769bf"}
Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.717080 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="782545d9cd0a2228ce5ac344a05246a708af9f21071ec0a6b4746bca281769bf"
Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.717084 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"
Mar 07 08:01:11 crc kubenswrapper[4761]: I0307 08:01:11.389306 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9zpnq"]
Mar 07 08:01:11 crc kubenswrapper[4761]: I0307 08:01:11.390102 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovn-controller" containerID="cri-o://f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a" gracePeriod=30
Mar 07 08:01:11 crc kubenswrapper[4761]: I0307 08:01:11.390152 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="nbdb" containerID="cri-o://1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269" gracePeriod=30
Mar 07 08:01:11 crc kubenswrapper[4761]: I0307 08:01:11.390221 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="sbdb" containerID="cri-o://8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40" gracePeriod=30
Mar 07 08:01:11 crc kubenswrapper[4761]: I0307 08:01:11.390237 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kube-rbac-proxy-node" containerID="cri-o://60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383" gracePeriod=30
Mar 07 08:01:11 crc kubenswrapper[4761]: I0307 08:01:11.390251 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="northd" containerID="cri-o://34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b" gracePeriod=30
Mar 07 08:01:11 crc kubenswrapper[4761]: I0307 08:01:11.390212 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59" gracePeriod=30
Mar 07 08:01:11 crc kubenswrapper[4761]: I0307 08:01:11.390263 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovn-acl-logging" containerID="cri-o://963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9" gracePeriod=30
Mar 07 08:01:11 crc kubenswrapper[4761]: I0307 08:01:11.444904 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovnkube-controller" containerID="cri-o://59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546" gracePeriod=30
Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.206459 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9zpnq_19ab486f-60a2-4522-a589-79b4c4375e53/ovn-acl-logging/0.log"
Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.207429 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9zpnq_19ab486f-60a2-4522-a589-79b4c4375e53/ovn-controller/0.log"
Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.207874 4761 generic.go:334] "Generic (PLEG): container finished" podID="19ab486f-60a2-4522-a589-79b4c4375e53" containerID="59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546" exitCode=0
Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.207915 4761 generic.go:334] "Generic (PLEG): container finished" podID="19ab486f-60a2-4522-a589-79b4c4375e53" containerID="8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40" exitCode=0
Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.207928 4761 generic.go:334] "Generic (PLEG): container finished" podID="19ab486f-60a2-4522-a589-79b4c4375e53" containerID="1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269" exitCode=0
Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.207942 4761 generic.go:334] "Generic (PLEG): container finished" podID="19ab486f-60a2-4522-a589-79b4c4375e53" containerID="34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b" exitCode=0
Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.207956 4761 generic.go:334] "Generic (PLEG): container finished" podID="19ab486f-60a2-4522-a589-79b4c4375e53" containerID="963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9" exitCode=143
Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.207968 4761 generic.go:334] "Generic (PLEG): container finished" podID="19ab486f-60a2-4522-a589-79b4c4375e53" containerID="f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a" exitCode=143
Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.207958 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546"}
Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.208020 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40"}
Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.208044 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269"}
Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.208102 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b"}
Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.208121 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9"}
Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.208179 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a"}
Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.209743 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d7fhg_e012dce7-a788-4dab-b758-5ace07b2c150/kube-multus/0.log"
Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.209785 4761 generic.go:334] "Generic (PLEG): container finished" podID="e012dce7-a788-4dab-b758-5ace07b2c150" containerID="ade39212f5be5eba8c4c503357adbd943542b70dcf1e4a7b7f089a8ddaaf64f5" exitCode=2
Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.209818 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d7fhg" event={"ID":"e012dce7-a788-4dab-b758-5ace07b2c150","Type":"ContainerDied","Data":"ade39212f5be5eba8c4c503357adbd943542b70dcf1e4a7b7f089a8ddaaf64f5"}
Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.210429 4761 scope.go:117] "RemoveContainer" containerID="ade39212f5be5eba8c4c503357adbd943542b70dcf1e4a7b7f089a8ddaaf64f5"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.093680 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9zpnq_19ab486f-60a2-4522-a589-79b4c4375e53/ovn-acl-logging/0.log"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.094362 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9zpnq_19ab486f-60a2-4522-a589-79b4c4375e53/ovn-controller/0.log"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.094746 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.140514 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ftgtl"]
Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.140892 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovn-acl-logging"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.140915 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovn-acl-logging"
Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.140924 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="sbdb"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.140932 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="sbdb"
Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.140949 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerName="util"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.140960 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerName="util"
Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.140971 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kubecfg-setup"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.140978 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kubecfg-setup"
Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.140987 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kube-rbac-proxy-node"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.140994 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kube-rbac-proxy-node"
Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.141004 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kube-rbac-proxy-ovn-metrics"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141012 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kube-rbac-proxy-ovn-metrics"
Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.141023 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerName="pull"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141029 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerName="pull"
Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.141040 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="northd"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141047 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="northd"
Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.141055 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovnkube-controller"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141063 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovnkube-controller"
Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.141074 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerName="extract"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141081 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerName="extract"
Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.141096 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovn-controller"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141103 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovn-controller"
Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.141115 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="nbdb"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141122 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="nbdb"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141239 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovn-controller"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141255 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kube-rbac-proxy-node"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141266 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kube-rbac-proxy-ovn-metrics"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141275 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovnkube-controller"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141285 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="northd"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141293 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovn-acl-logging"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141305 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="sbdb"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141316 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerName="extract"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141332 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="nbdb"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.143877 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl"
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.207447 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-env-overrides\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.207507 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-etc-openvswitch\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.207547 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-slash\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.207623 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.207681 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-slash" (OuterVolumeSpecName: "host-slash") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.207917 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-systemd-units\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.207924 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.207967 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-config\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.207979 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208009 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19ab486f-60a2-4522-a589-79b4c4375e53-ovn-node-metrics-cert\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208048 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-ovn\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208070 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-bin\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208091 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-node-log\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208089 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208128 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208112 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-node-log" (OuterVolumeSpecName: "node-log") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208124 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5l7k\" (UniqueName: \"kubernetes.io/projected/19ab486f-60a2-4522-a589-79b4c4375e53-kube-api-access-n5l7k\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208260 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-kubelet\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208291 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208298 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-netns\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208308 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208319 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208336 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-systemd\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208375 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-ovn-kubernetes\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208393 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-var-lib-cni-networks-ovn-kubernetes\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208421 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-var-lib-openvswitch\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208442 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-openvswitch\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208459 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208466 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-log-socket\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208485 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-netd\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208526 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-script-lib\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") "
Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208483 4761
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208501 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-log-socket" (OuterVolumeSpecName: "log-socket") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208542 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208504 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208450 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208729 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-node-log\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209023 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf490489-7ff3-48aa-a8d4-276077bcea1b-env-overrides\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209076 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htzpw\" (UniqueName: \"kubernetes.io/projected/cf490489-7ff3-48aa-a8d4-276077bcea1b-kube-api-access-htzpw\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209208 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-run-systemd\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209276 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-etc-openvswitch\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209313 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf490489-7ff3-48aa-a8d4-276077bcea1b-ovnkube-config\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209336 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-run-netns\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209377 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209422 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-cni-bin\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209454 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-run-openvswitch\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209476 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-systemd-units\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209506 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cf490489-7ff3-48aa-a8d4-276077bcea1b-ovnkube-script-lib\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209533 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-run-ovn\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209577 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-var-lib-openvswitch\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209617 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-run-ovn-kubernetes\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209652 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-slash\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209686 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-kubelet\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209729 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-log-socket\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: 
I0307 08:01:13.209760 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-cni-netd\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209792 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf490489-7ff3-48aa-a8d4-276077bcea1b-ovn-node-metrics-cert\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209860 4761 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209877 4761 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-slash\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209889 4761 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209899 4761 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209912 4761 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209924 4761 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-node-log\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209950 4761 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209963 4761 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209973 4761 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209984 4761 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209997 4761 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.210009 4761 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.210019 4761 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.210029 4761 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-log-socket\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.210040 4761 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.210051 4761 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.210691 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.216375 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19ab486f-60a2-4522-a589-79b4c4375e53-kube-api-access-n5l7k" (OuterVolumeSpecName: "kube-api-access-n5l7k") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). 
InnerVolumeSpecName "kube-api-access-n5l7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.216710 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19ab486f-60a2-4522-a589-79b4c4375e53-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.222434 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d7fhg_e012dce7-a788-4dab-b758-5ace07b2c150/kube-multus/0.log" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.222561 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d7fhg" event={"ID":"e012dce7-a788-4dab-b758-5ace07b2c150","Type":"ContainerStarted","Data":"1602586b1b8d667b172fae1f5bd1a5d79fb29f1a7541185c4b36078e6325864f"} Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.225033 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.231604 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9zpnq_19ab486f-60a2-4522-a589-79b4c4375e53/ovn-acl-logging/0.log" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.232234 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9zpnq_19ab486f-60a2-4522-a589-79b4c4375e53/ovn-controller/0.log" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.232650 4761 generic.go:334] "Generic (PLEG): container finished" podID="19ab486f-60a2-4522-a589-79b4c4375e53" containerID="d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59" exitCode=0 Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.232678 4761 generic.go:334] "Generic (PLEG): container finished" podID="19ab486f-60a2-4522-a589-79b4c4375e53" containerID="60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383" exitCode=0 Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.232699 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59"} Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.232773 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383"} Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.232786 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"75ce9a667bf5bdb687aaa63e45963644ed7766516d86b9aab3ce1f1bcd7454dd"} Mar 07 08:01:13 crc 
kubenswrapper[4761]: I0307 08:01:13.232819 4761 scope.go:117] "RemoveContainer" containerID="59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.233022 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.269854 4761 scope.go:117] "RemoveContainer" containerID="8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312544 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-etc-openvswitch\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312585 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf490489-7ff3-48aa-a8d4-276077bcea1b-ovnkube-config\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312603 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-run-netns\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312624 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312647 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-cni-bin\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312664 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-run-openvswitch\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312683 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-systemd-units\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312701 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cf490489-7ff3-48aa-a8d4-276077bcea1b-ovnkube-script-lib\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312743 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-run-ovn\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312768 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-var-lib-openvswitch\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312790 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-run-ovn-kubernetes\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312809 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-slash\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312828 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-kubelet\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312844 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-log-socket\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc 
kubenswrapper[4761]: I0307 08:01:13.312871 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-cni-netd\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312900 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf490489-7ff3-48aa-a8d4-276077bcea1b-ovn-node-metrics-cert\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312920 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-node-log\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312935 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf490489-7ff3-48aa-a8d4-276077bcea1b-env-overrides\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312961 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htzpw\" (UniqueName: \"kubernetes.io/projected/cf490489-7ff3-48aa-a8d4-276077bcea1b-kube-api-access-htzpw\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313002 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-run-systemd\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313045 4761 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313057 4761 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19ab486f-60a2-4522-a589-79b4c4375e53-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313070 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5l7k\" (UniqueName: \"kubernetes.io/projected/19ab486f-60a2-4522-a589-79b4c4375e53-kube-api-access-n5l7k\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313082 4761 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313119 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-run-systemd\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313157 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-etc-openvswitch\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313770 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf490489-7ff3-48aa-a8d4-276077bcea1b-ovnkube-config\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313810 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-run-netns\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313833 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313858 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-cni-bin\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313880 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-run-openvswitch\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313902 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-systemd-units\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.314331 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cf490489-7ff3-48aa-a8d4-276077bcea1b-ovnkube-script-lib\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.314363 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-run-ovn\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.314386 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-var-lib-openvswitch\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.314408 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.314429 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-slash\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.314452 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-kubelet\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.314474 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-log-socket\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.314627 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-cni-netd\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.315316 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf490489-7ff3-48aa-a8d4-276077bcea1b-env-overrides\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc 
kubenswrapper[4761]: I0307 08:01:13.315390 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-node-log\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.318856 4761 scope.go:117] "RemoveContainer" containerID="1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.319263 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf490489-7ff3-48aa-a8d4-276077bcea1b-ovn-node-metrics-cert\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.346928 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9zpnq"] Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.356774 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9zpnq"] Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.461636 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htzpw\" (UniqueName: \"kubernetes.io/projected/cf490489-7ff3-48aa-a8d4-276077bcea1b-kube-api-access-htzpw\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.463976 4761 scope.go:117] "RemoveContainer" containerID="34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.479194 4761 scope.go:117] "RemoveContainer" containerID="d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59" Mar 
07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.496396 4761 scope.go:117] "RemoveContainer" containerID="60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.510424 4761 scope.go:117] "RemoveContainer" containerID="963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.536554 4761 scope.go:117] "RemoveContainer" containerID="f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.550946 4761 scope.go:117] "RemoveContainer" containerID="bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.564328 4761 scope.go:117] "RemoveContainer" containerID="59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.564621 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546\": container with ID starting with 59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546 not found: ID does not exist" containerID="59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.564653 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546"} err="failed to get container status \"59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546\": rpc error: code = NotFound desc = could not find container \"59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546\": container with ID starting with 59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 
08:01:13.564671 4761 scope.go:117] "RemoveContainer" containerID="8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.564942 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40\": container with ID starting with 8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40 not found: ID does not exist" containerID="8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.565029 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40"} err="failed to get container status \"8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40\": rpc error: code = NotFound desc = could not find container \"8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40\": container with ID starting with 8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.565101 4761 scope.go:117] "RemoveContainer" containerID="1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.565439 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269\": container with ID starting with 1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269 not found: ID does not exist" containerID="1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.565469 4761 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269"} err="failed to get container status \"1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269\": rpc error: code = NotFound desc = could not find container \"1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269\": container with ID starting with 1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.565490 4761 scope.go:117] "RemoveContainer" containerID="34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.565737 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b\": container with ID starting with 34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b not found: ID does not exist" containerID="34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.565764 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b"} err="failed to get container status \"34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b\": rpc error: code = NotFound desc = could not find container \"34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b\": container with ID starting with 34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.565782 4761 scope.go:117] "RemoveContainer" containerID="d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.566014 4761 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59\": container with ID starting with d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59 not found: ID does not exist" containerID="d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.566110 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59"} err="failed to get container status \"d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59\": rpc error: code = NotFound desc = could not find container \"d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59\": container with ID starting with d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.566208 4761 scope.go:117] "RemoveContainer" containerID="60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.566688 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383\": container with ID starting with 60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383 not found: ID does not exist" containerID="60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.566788 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383"} err="failed to get container status \"60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383\": rpc error: code = NotFound desc = could not find container 
\"60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383\": container with ID starting with 60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.566859 4761 scope.go:117] "RemoveContainer" containerID="963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.567154 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9\": container with ID starting with 963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9 not found: ID does not exist" containerID="963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.567191 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9"} err="failed to get container status \"963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9\": rpc error: code = NotFound desc = could not find container \"963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9\": container with ID starting with 963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.567217 4761 scope.go:117] "RemoveContainer" containerID="f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.567549 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a\": container with ID starting with f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a not found: ID does not exist" 
containerID="f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.567646 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a"} err="failed to get container status \"f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a\": rpc error: code = NotFound desc = could not find container \"f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a\": container with ID starting with f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.567736 4761 scope.go:117] "RemoveContainer" containerID="bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.568064 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455\": container with ID starting with bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455 not found: ID does not exist" containerID="bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.568137 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455"} err="failed to get container status \"bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455\": rpc error: code = NotFound desc = could not find container \"bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455\": container with ID starting with bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.568198 4761 scope.go:117] 
"RemoveContainer" containerID="59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.568513 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546"} err="failed to get container status \"59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546\": rpc error: code = NotFound desc = could not find container \"59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546\": container with ID starting with 59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.568582 4761 scope.go:117] "RemoveContainer" containerID="8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.568882 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40"} err="failed to get container status \"8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40\": rpc error: code = NotFound desc = could not find container \"8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40\": container with ID starting with 8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.568952 4761 scope.go:117] "RemoveContainer" containerID="1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.569217 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269"} err="failed to get container status \"1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269\": rpc error: code = 
NotFound desc = could not find container \"1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269\": container with ID starting with 1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.569252 4761 scope.go:117] "RemoveContainer" containerID="34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.569541 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b"} err="failed to get container status \"34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b\": rpc error: code = NotFound desc = could not find container \"34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b\": container with ID starting with 34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.569609 4761 scope.go:117] "RemoveContainer" containerID="d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.570111 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59"} err="failed to get container status \"d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59\": rpc error: code = NotFound desc = could not find container \"d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59\": container with ID starting with d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.570155 4761 scope.go:117] "RemoveContainer" containerID="60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383" Mar 07 08:01:13 crc 
kubenswrapper[4761]: I0307 08:01:13.570417 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383"} err="failed to get container status \"60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383\": rpc error: code = NotFound desc = could not find container \"60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383\": container with ID starting with 60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.570496 4761 scope.go:117] "RemoveContainer" containerID="963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.570855 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9"} err="failed to get container status \"963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9\": rpc error: code = NotFound desc = could not find container \"963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9\": container with ID starting with 963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.570885 4761 scope.go:117] "RemoveContainer" containerID="f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.571124 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a"} err="failed to get container status \"f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a\": rpc error: code = NotFound desc = could not find container \"f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a\": container 
with ID starting with f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.571142 4761 scope.go:117] "RemoveContainer" containerID="bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.571372 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455"} err="failed to get container status \"bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455\": rpc error: code = NotFound desc = could not find container \"bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455\": container with ID starting with bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.713314 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" path="/var/lib/kubelet/pods/19ab486f-60a2-4522-a589-79b4c4375e53/volumes" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.755685 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.769336 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.769392 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.769438 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.769995 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e56717fa60308e8f622ec33776708c4b00d9ccd7a8ad0a18a994be6b41d32a1"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.770049 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://4e56717fa60308e8f622ec33776708c4b00d9ccd7a8ad0a18a994be6b41d32a1" gracePeriod=600 Mar 07 08:01:14 crc kubenswrapper[4761]: I0307 08:01:14.239665 4761 generic.go:334] "Generic (PLEG): container finished" 
podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="4e56717fa60308e8f622ec33776708c4b00d9ccd7a8ad0a18a994be6b41d32a1" exitCode=0
Mar 07 08:01:14 crc kubenswrapper[4761]: I0307 08:01:14.239757 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"4e56717fa60308e8f622ec33776708c4b00d9ccd7a8ad0a18a994be6b41d32a1"}
Mar 07 08:01:14 crc kubenswrapper[4761]: I0307 08:01:14.240122 4761 scope.go:117] "RemoveContainer" containerID="99999bd284e69fd9faa6103a00d03a466d499b9bac79905f9b3132ce0f479790"
Mar 07 08:01:14 crc kubenswrapper[4761]: I0307 08:01:14.240449 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"c1d761b7f5e7692b9893671098d197b8b035ee46f61a8e0511bcc06bc73f8c8f"}
Mar 07 08:01:14 crc kubenswrapper[4761]: I0307 08:01:14.242861 4761 generic.go:334] "Generic (PLEG): container finished" podID="cf490489-7ff3-48aa-a8d4-276077bcea1b" containerID="816b89552d8a9067da82ae00011acf1b35ab4152113a9c74b8c24b1526c82c7a" exitCode=0
Mar 07 08:01:14 crc kubenswrapper[4761]: I0307 08:01:14.242933 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerDied","Data":"816b89552d8a9067da82ae00011acf1b35ab4152113a9c74b8c24b1526c82c7a"}
Mar 07 08:01:14 crc kubenswrapper[4761]: I0307 08:01:14.242981 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerStarted","Data":"f01808d3c8ce1b3aced908063bc4dc33395ed54d6a75d5f697d50546e8649c67"}
Mar 07 08:01:15 crc kubenswrapper[4761]: I0307 08:01:15.252128 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerStarted","Data":"ef3e9106a607a64cc45754a48a931e495afcb569951e9edbf6a070f7fd69cf64"}
Mar 07 08:01:15 crc kubenswrapper[4761]: I0307 08:01:15.252508 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerStarted","Data":"488ded558ca9fe66cd9e5a89e13072ad6d0e6a1da742a9f942c9dedf96932927"}
Mar 07 08:01:15 crc kubenswrapper[4761]: I0307 08:01:15.252519 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerStarted","Data":"a24ef4d8b74651e6761fb5913b8cb7b493c37f2c168fa130fb8a05245541a812"}
Mar 07 08:01:15 crc kubenswrapper[4761]: I0307 08:01:15.252528 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerStarted","Data":"a1e2b3ee83a605806219a18f0bdab914f188cf094eefd9341f2e5e11fc2b185d"}
Mar 07 08:01:15 crc kubenswrapper[4761]: I0307 08:01:15.252536 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerStarted","Data":"501c89b2a1dde3c7f8ae0f68035efc553b9fe50472f836758f9c7b7977f1cd79"}
Mar 07 08:01:15 crc kubenswrapper[4761]: I0307 08:01:15.252545 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerStarted","Data":"4029228fe2dde504e87e75f1ef10028b75909a845e72b4a1828adf03513fceb0"}
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.291260 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerStarted","Data":"19acedcf08ec30f9f544380193e40ed3b8666425eef197006bc8565a775ed285"}
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.345848 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9"]
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.346739 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.351018 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.351290 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-7pjs6"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.351429 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.373381 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpclt\" (UniqueName: \"kubernetes.io/projected/40c12f82-6c14-4659-80c5-ab38e649706a-kube-api-access-dpclt\") pod \"obo-prometheus-operator-68bc856cb9-hftl9\" (UID: \"40c12f82-6c14-4659-80c5-ab38e649706a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.474493 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpclt\" (UniqueName: \"kubernetes.io/projected/40c12f82-6c14-4659-80c5-ab38e649706a-kube-api-access-dpclt\") pod \"obo-prometheus-operator-68bc856cb9-hftl9\" (UID: \"40c12f82-6c14-4659-80c5-ab38e649706a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.475728 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w"]
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.476514 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.478025 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-qmdzm"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.478027 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.489469 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"]
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.490138 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.495871 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpclt\" (UniqueName: \"kubernetes.io/projected/40c12f82-6c14-4659-80c5-ab38e649706a-kube-api-access-dpclt\") pod \"obo-prometheus-operator-68bc856cb9-hftl9\" (UID: \"40c12f82-6c14-4659-80c5-ab38e649706a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.575495 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps\" (UID: \"7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.575547 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps\" (UID: \"7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.575581 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60fad35f-402e-4c65-a097-a836c5692479-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w\" (UID: \"60fad35f-402e-4c65-a097-a836c5692479\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.575639 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60fad35f-402e-4c65-a097-a836c5692479-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w\" (UID: \"60fad35f-402e-4c65-a097-a836c5692479\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.662210 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.676386 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps\" (UID: \"7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.676426 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps\" (UID: \"7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.676451 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60fad35f-402e-4c65-a097-a836c5692479-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w\" (UID: \"60fad35f-402e-4c65-a097-a836c5692479\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.676479 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60fad35f-402e-4c65-a097-a836c5692479-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w\" (UID: \"60fad35f-402e-4c65-a097-a836c5692479\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.679640 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps\" (UID: \"7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.685751 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60fad35f-402e-4c65-a097-a836c5692479-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w\" (UID: \"60fad35f-402e-4c65-a097-a836c5692479\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.696324 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps\" (UID: \"7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.698909 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-kfph9"]
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.699935 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kfph9"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.709129 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-7ffjc"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.709305 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.716236 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60fad35f-402e-4c65-a097-a836c5692479-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w\" (UID: \"60fad35f-402e-4c65-a097-a836c5692479\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w"
Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.742577 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators_40c12f82-6c14-4659-80c5-ab38e649706a_0(8bc568f8f2c4561d10656bc4af31e35bd3120823dfe83e7a874b0f573f569d63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.742651 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators_40c12f82-6c14-4659-80c5-ab38e649706a_0(8bc568f8f2c4561d10656bc4af31e35bd3120823dfe83e7a874b0f573f569d63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9"
Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.742679 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators_40c12f82-6c14-4659-80c5-ab38e649706a_0(8bc568f8f2c4561d10656bc4af31e35bd3120823dfe83e7a874b0f573f569d63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9"
Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.742791 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators(40c12f82-6c14-4659-80c5-ab38e649706a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators(40c12f82-6c14-4659-80c5-ab38e649706a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators_40c12f82-6c14-4659-80c5-ab38e649706a_0(8bc568f8f2c4561d10656bc4af31e35bd3120823dfe83e7a874b0f573f569d63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" podUID="40c12f82-6c14-4659-80c5-ab38e649706a"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.778298 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b17d76c5-b5d9-4f79-841e-287d05540b40-observability-operator-tls\") pod \"observability-operator-59bdc8b94-kfph9\" (UID: \"b17d76c5-b5d9-4f79-841e-287d05540b40\") " pod="openshift-operators/observability-operator-59bdc8b94-kfph9"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.778350 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vswc4\" (UniqueName: \"kubernetes.io/projected/b17d76c5-b5d9-4f79-841e-287d05540b40-kube-api-access-vswc4\") pod \"observability-operator-59bdc8b94-kfph9\" (UID: \"b17d76c5-b5d9-4f79-841e-287d05540b40\") " pod="openshift-operators/observability-operator-59bdc8b94-kfph9"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.792209 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w"
Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.818334 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators_60fad35f-402e-4c65-a097-a836c5692479_0(4fc5ec532aef52df0906cfd3ff69a1c70a5b1c434aca84adf94a844441041bd0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.818472 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators_60fad35f-402e-4c65-a097-a836c5692479_0(4fc5ec532aef52df0906cfd3ff69a1c70a5b1c434aca84adf94a844441041bd0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w"
Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.818550 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators_60fad35f-402e-4c65-a097-a836c5692479_0(4fc5ec532aef52df0906cfd3ff69a1c70a5b1c434aca84adf94a844441041bd0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w"
Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.818661 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators(60fad35f-402e-4c65-a097-a836c5692479)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators(60fad35f-402e-4c65-a097-a836c5692479)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators_60fad35f-402e-4c65-a097-a836c5692479_0(4fc5ec532aef52df0906cfd3ff69a1c70a5b1c434aca84adf94a844441041bd0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" podUID="60fad35f-402e-4c65-a097-a836c5692479"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.835283 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"
Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.860207 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6_0(c00779fad1f7209caa54c4e700b8e7e01941762302b4774cf365e210b888ba9e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.860283 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6_0(c00779fad1f7209caa54c4e700b8e7e01941762302b4774cf365e210b888ba9e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"
Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.860308 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6_0(c00779fad1f7209caa54c4e700b8e7e01941762302b4774cf365e210b888ba9e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"
Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.864665 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators(7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators(7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6_0(c00779fad1f7209caa54c4e700b8e7e01941762302b4774cf365e210b888ba9e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" podUID="7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.875342 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-4l52t"]
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.876096 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4l52t"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.879157 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b17d76c5-b5d9-4f79-841e-287d05540b40-observability-operator-tls\") pod \"observability-operator-59bdc8b94-kfph9\" (UID: \"b17d76c5-b5d9-4f79-841e-287d05540b40\") " pod="openshift-operators/observability-operator-59bdc8b94-kfph9"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.879207 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-572lx\" (UniqueName: \"kubernetes.io/projected/0c90c3e5-de84-4cb1-ac22-fe02ca708196-kube-api-access-572lx\") pod \"perses-operator-5bf474d74f-4l52t\" (UID: \"0c90c3e5-de84-4cb1-ac22-fe02ca708196\") " pod="openshift-operators/perses-operator-5bf474d74f-4l52t"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.879228 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vswc4\" (UniqueName: \"kubernetes.io/projected/b17d76c5-b5d9-4f79-841e-287d05540b40-kube-api-access-vswc4\") pod \"observability-operator-59bdc8b94-kfph9\" (UID: \"b17d76c5-b5d9-4f79-841e-287d05540b40\") " pod="openshift-operators/observability-operator-59bdc8b94-kfph9"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.879259 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c90c3e5-de84-4cb1-ac22-fe02ca708196-openshift-service-ca\") pod \"perses-operator-5bf474d74f-4l52t\" (UID: \"0c90c3e5-de84-4cb1-ac22-fe02ca708196\") " pod="openshift-operators/perses-operator-5bf474d74f-4l52t"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.882192 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b17d76c5-b5d9-4f79-841e-287d05540b40-observability-operator-tls\") pod \"observability-operator-59bdc8b94-kfph9\" (UID: \"b17d76c5-b5d9-4f79-841e-287d05540b40\") " pod="openshift-operators/observability-operator-59bdc8b94-kfph9"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.883647 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-ccs2n"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.903460 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vswc4\" (UniqueName: \"kubernetes.io/projected/b17d76c5-b5d9-4f79-841e-287d05540b40-kube-api-access-vswc4\") pod \"observability-operator-59bdc8b94-kfph9\" (UID: \"b17d76c5-b5d9-4f79-841e-287d05540b40\") " pod="openshift-operators/observability-operator-59bdc8b94-kfph9"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.979958 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c90c3e5-de84-4cb1-ac22-fe02ca708196-openshift-service-ca\") pod \"perses-operator-5bf474d74f-4l52t\" (UID: \"0c90c3e5-de84-4cb1-ac22-fe02ca708196\") " pod="openshift-operators/perses-operator-5bf474d74f-4l52t"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.980240 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-572lx\" (UniqueName: \"kubernetes.io/projected/0c90c3e5-de84-4cb1-ac22-fe02ca708196-kube-api-access-572lx\") pod \"perses-operator-5bf474d74f-4l52t\" (UID: \"0c90c3e5-de84-4cb1-ac22-fe02ca708196\") " pod="openshift-operators/perses-operator-5bf474d74f-4l52t"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.981525 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c90c3e5-de84-4cb1-ac22-fe02ca708196-openshift-service-ca\") pod \"perses-operator-5bf474d74f-4l52t\" (UID: \"0c90c3e5-de84-4cb1-ac22-fe02ca708196\") " pod="openshift-operators/perses-operator-5bf474d74f-4l52t"
Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.996876 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-572lx\" (UniqueName: \"kubernetes.io/projected/0c90c3e5-de84-4cb1-ac22-fe02ca708196-kube-api-access-572lx\") pod \"perses-operator-5bf474d74f-4l52t\" (UID: \"0c90c3e5-de84-4cb1-ac22-fe02ca708196\") " pod="openshift-operators/perses-operator-5bf474d74f-4l52t"
Mar 07 08:01:19 crc kubenswrapper[4761]: I0307 08:01:19.073924 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kfph9"
Mar 07 08:01:19 crc kubenswrapper[4761]: E0307 08:01:19.098053 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kfph9_openshift-operators_b17d76c5-b5d9-4f79-841e-287d05540b40_0(606b9abb131970f4d596d57316c6eadb341dfcfa19d2b857aa97cf146c7815d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 07 08:01:19 crc kubenswrapper[4761]: E0307 08:01:19.098150 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kfph9_openshift-operators_b17d76c5-b5d9-4f79-841e-287d05540b40_0(606b9abb131970f4d596d57316c6eadb341dfcfa19d2b857aa97cf146c7815d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-kfph9"
Mar 07 08:01:19 crc kubenswrapper[4761]: E0307 08:01:19.098185 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kfph9_openshift-operators_b17d76c5-b5d9-4f79-841e-287d05540b40_0(606b9abb131970f4d596d57316c6eadb341dfcfa19d2b857aa97cf146c7815d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-kfph9"
Mar 07 08:01:19 crc kubenswrapper[4761]: E0307 08:01:19.098254 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-kfph9_openshift-operators(b17d76c5-b5d9-4f79-841e-287d05540b40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-kfph9_openshift-operators(b17d76c5-b5d9-4f79-841e-287d05540b40)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kfph9_openshift-operators_b17d76c5-b5d9-4f79-841e-287d05540b40_0(606b9abb131970f4d596d57316c6eadb341dfcfa19d2b857aa97cf146c7815d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" podUID="b17d76c5-b5d9-4f79-841e-287d05540b40"
Mar 07 08:01:19 crc kubenswrapper[4761]: I0307 08:01:19.190088 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4l52t"
Mar 07 08:01:19 crc kubenswrapper[4761]: E0307 08:01:19.221562 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4l52t_openshift-operators_0c90c3e5-de84-4cb1-ac22-fe02ca708196_0(78d84bb2956110706a0708d837fed3a02b600eacb086dbca443ee2f16bb85cd9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 07 08:01:19 crc kubenswrapper[4761]: E0307 08:01:19.221626 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4l52t_openshift-operators_0c90c3e5-de84-4cb1-ac22-fe02ca708196_0(78d84bb2956110706a0708d837fed3a02b600eacb086dbca443ee2f16bb85cd9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-4l52t"
Mar 07 08:01:19 crc kubenswrapper[4761]: E0307 08:01:19.221652 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4l52t_openshift-operators_0c90c3e5-de84-4cb1-ac22-fe02ca708196_0(78d84bb2956110706a0708d837fed3a02b600eacb086dbca443ee2f16bb85cd9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-4l52t"
Mar 07 08:01:19 crc kubenswrapper[4761]: E0307 08:01:19.221698 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-4l52t_openshift-operators(0c90c3e5-de84-4cb1-ac22-fe02ca708196)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-4l52t_openshift-operators(0c90c3e5-de84-4cb1-ac22-fe02ca708196)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4l52t_openshift-operators_0c90c3e5-de84-4cb1-ac22-fe02ca708196_0(78d84bb2956110706a0708d837fed3a02b600eacb086dbca443ee2f16bb85cd9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" podUID="0c90c3e5-de84-4cb1-ac22-fe02ca708196"
Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.306640 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerStarted","Data":"0a6f6e6b067f1fe41e1ef27005e633f16cf933d697db2c17b2c77a3faa680533"}
Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.307147 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl"
Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.307241 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl"
Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.307257 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl"
Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.362093 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" podStartSLOduration=7.362065512 podStartE2EDuration="7.362065512s" podCreationTimestamp="2026-03-07 08:01:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:01:20.353611771 +0000 UTC m=+737.262778246" watchObservedRunningTime="2026-03-07 08:01:20.362065512 +0000 UTC m=+737.271231987"
Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.375635 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl"
Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.414362 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl"
Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.962569 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"]
Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.962689 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"
Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.963295 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"
Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.998281 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w"]
Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.998409 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w"
Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.998866 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w"
Mar 07 08:01:21 crc kubenswrapper[4761]: I0307 08:01:21.003502 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-kfph9"]
Mar 07 08:01:21 crc kubenswrapper[4761]: I0307 08:01:21.003860 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kfph9"
Mar 07 08:01:21 crc kubenswrapper[4761]: I0307 08:01:21.004343 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kfph9"
Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.024918 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6_0(87968ca6f42677b95f14782d428a8ebd9fcb12d0f94ed6db82d8e5e7b5c0ca73): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.024991 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6_0(87968ca6f42677b95f14782d428a8ebd9fcb12d0f94ed6db82d8e5e7b5c0ca73): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"
Mar 07 08:01:21 crc kubenswrapper[4761]: I0307 08:01:21.024998 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-4l52t"]
Mar 07 08:01:21 crc kubenswrapper[4761]: I0307 08:01:21.025160 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4l52t"
Mar 07 08:01:21 crc kubenswrapper[4761]: I0307 08:01:21.025897 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4l52t"
Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.025014 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6_0(87968ca6f42677b95f14782d428a8ebd9fcb12d0f94ed6db82d8e5e7b5c0ca73): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.026263 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators(7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators(7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6_0(87968ca6f42677b95f14782d428a8ebd9fcb12d0f94ed6db82d8e5e7b5c0ca73): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" podUID="7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6" Mar 07 08:01:21 crc kubenswrapper[4761]: I0307 08:01:21.029542 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9"] Mar 07 08:01:21 crc kubenswrapper[4761]: I0307 08:01:21.029647 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:21 crc kubenswrapper[4761]: I0307 08:01:21.030086 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.048950 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators_60fad35f-402e-4c65-a097-a836c5692479_0(51a2401f190330de1a0cfe0c3595a897fc3e98e3da07efb9dac83efe938e368c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.049021 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators_60fad35f-402e-4c65-a097-a836c5692479_0(51a2401f190330de1a0cfe0c3595a897fc3e98e3da07efb9dac83efe938e368c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.049044 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators_60fad35f-402e-4c65-a097-a836c5692479_0(51a2401f190330de1a0cfe0c3595a897fc3e98e3da07efb9dac83efe938e368c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.049095 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators(60fad35f-402e-4c65-a097-a836c5692479)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators(60fad35f-402e-4c65-a097-a836c5692479)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators_60fad35f-402e-4c65-a097-a836c5692479_0(51a2401f190330de1a0cfe0c3595a897fc3e98e3da07efb9dac83efe938e368c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" podUID="60fad35f-402e-4c65-a097-a836c5692479" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.095862 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kfph9_openshift-operators_b17d76c5-b5d9-4f79-841e-287d05540b40_0(00bdd4b201e3ba650076114faeb890799317b115847b19a20e29508b83b91657): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.095948 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kfph9_openshift-operators_b17d76c5-b5d9-4f79-841e-287d05540b40_0(00bdd4b201e3ba650076114faeb890799317b115847b19a20e29508b83b91657): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.095984 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kfph9_openshift-operators_b17d76c5-b5d9-4f79-841e-287d05540b40_0(00bdd4b201e3ba650076114faeb890799317b115847b19a20e29508b83b91657): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.096060 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-kfph9_openshift-operators(b17d76c5-b5d9-4f79-841e-287d05540b40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-kfph9_openshift-operators(b17d76c5-b5d9-4f79-841e-287d05540b40)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kfph9_openshift-operators_b17d76c5-b5d9-4f79-841e-287d05540b40_0(00bdd4b201e3ba650076114faeb890799317b115847b19a20e29508b83b91657): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" podUID="b17d76c5-b5d9-4f79-841e-287d05540b40" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.119354 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators_40c12f82-6c14-4659-80c5-ab38e649706a_0(1ec03f041ab44d9b9c841db23656b2c73d28404e01d369b496b91fc1bf8116d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.119420 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators_40c12f82-6c14-4659-80c5-ab38e649706a_0(1ec03f041ab44d9b9c841db23656b2c73d28404e01d369b496b91fc1bf8116d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.119440 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators_40c12f82-6c14-4659-80c5-ab38e649706a_0(1ec03f041ab44d9b9c841db23656b2c73d28404e01d369b496b91fc1bf8116d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.119480 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators(40c12f82-6c14-4659-80c5-ab38e649706a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators(40c12f82-6c14-4659-80c5-ab38e649706a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators_40c12f82-6c14-4659-80c5-ab38e649706a_0(1ec03f041ab44d9b9c841db23656b2c73d28404e01d369b496b91fc1bf8116d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" podUID="40c12f82-6c14-4659-80c5-ab38e649706a" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.127225 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4l52t_openshift-operators_0c90c3e5-de84-4cb1-ac22-fe02ca708196_0(d5aec4141a8f2acba3ad44d37332aaa413924a9ea9637932d2eb2da41a5cf283): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.127309 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4l52t_openshift-operators_0c90c3e5-de84-4cb1-ac22-fe02ca708196_0(d5aec4141a8f2acba3ad44d37332aaa413924a9ea9637932d2eb2da41a5cf283): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.127330 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4l52t_openshift-operators_0c90c3e5-de84-4cb1-ac22-fe02ca708196_0(d5aec4141a8f2acba3ad44d37332aaa413924a9ea9637932d2eb2da41a5cf283): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.127376 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-4l52t_openshift-operators(0c90c3e5-de84-4cb1-ac22-fe02ca708196)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-4l52t_openshift-operators(0c90c3e5-de84-4cb1-ac22-fe02ca708196)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4l52t_openshift-operators_0c90c3e5-de84-4cb1-ac22-fe02ca708196_0(d5aec4141a8f2acba3ad44d37332aaa413924a9ea9637932d2eb2da41a5cf283): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" podUID="0c90c3e5-de84-4cb1-ac22-fe02ca708196" Mar 07 08:01:32 crc kubenswrapper[4761]: I0307 08:01:32.704832 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:32 crc kubenswrapper[4761]: I0307 08:01:32.705683 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:33 crc kubenswrapper[4761]: I0307 08:01:33.001178 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"] Mar 07 08:01:33 crc kubenswrapper[4761]: I0307 08:01:33.382282 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" event={"ID":"7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6","Type":"ContainerStarted","Data":"99736c3d29863053d8ee920b6026616bd58fd567e04335f75d4337d656477dec"} Mar 07 08:01:34 crc kubenswrapper[4761]: I0307 08:01:34.704702 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:34 crc kubenswrapper[4761]: I0307 08:01:34.705219 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:35 crc kubenswrapper[4761]: I0307 08:01:35.229410 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-kfph9"] Mar 07 08:01:35 crc kubenswrapper[4761]: W0307 08:01:35.250603 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb17d76c5_b5d9_4f79_841e_287d05540b40.slice/crio-c4b29ccf39b3fecc7f38070a56aa82218dfa74ba7adeb7d12dbb5368547cdf16 WatchSource:0}: Error finding container c4b29ccf39b3fecc7f38070a56aa82218dfa74ba7adeb7d12dbb5368547cdf16: Status 404 returned error can't find the container with id c4b29ccf39b3fecc7f38070a56aa82218dfa74ba7adeb7d12dbb5368547cdf16 Mar 07 08:01:35 crc kubenswrapper[4761]: I0307 08:01:35.400890 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" 
event={"ID":"b17d76c5-b5d9-4f79-841e-287d05540b40","Type":"ContainerStarted","Data":"c4b29ccf39b3fecc7f38070a56aa82218dfa74ba7adeb7d12dbb5368547cdf16"} Mar 07 08:01:35 crc kubenswrapper[4761]: I0307 08:01:35.705332 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:35 crc kubenswrapper[4761]: I0307 08:01:35.705712 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:35 crc kubenswrapper[4761]: I0307 08:01:35.705789 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:35 crc kubenswrapper[4761]: I0307 08:01:35.705986 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:35 crc kubenswrapper[4761]: I0307 08:01:35.706009 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:35 crc kubenswrapper[4761]: I0307 08:01:35.706607 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:37 crc kubenswrapper[4761]: I0307 08:01:37.821916 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-4l52t"] Mar 07 08:01:37 crc kubenswrapper[4761]: I0307 08:01:37.825988 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9"] Mar 07 08:01:37 crc kubenswrapper[4761]: I0307 08:01:37.913646 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w"] Mar 07 08:01:37 crc kubenswrapper[4761]: W0307 08:01:37.926710 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60fad35f_402e_4c65_a097_a836c5692479.slice/crio-10cf3bcccf096e172adc1a6b40146dbbce40f5280373a5f892affdab4a6468ca WatchSource:0}: Error finding container 10cf3bcccf096e172adc1a6b40146dbbce40f5280373a5f892affdab4a6468ca: Status 404 returned error can't find the container with id 10cf3bcccf096e172adc1a6b40146dbbce40f5280373a5f892affdab4a6468ca Mar 07 08:01:38 crc kubenswrapper[4761]: I0307 08:01:38.421663 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" event={"ID":"0c90c3e5-de84-4cb1-ac22-fe02ca708196","Type":"ContainerStarted","Data":"3f8778ddc7c3fd679c47f6d3f1252ffc3b5ce08d498d2bae05ea671f2741e27e"} Mar 07 08:01:38 crc kubenswrapper[4761]: I0307 08:01:38.423030 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" event={"ID":"60fad35f-402e-4c65-a097-a836c5692479","Type":"ContainerStarted","Data":"ba7d301892ca104dd961cd2066a588cb5a6fd21a82b6ce044cf90485c69fde78"} Mar 07 08:01:38 crc kubenswrapper[4761]: I0307 08:01:38.423053 4761 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" event={"ID":"60fad35f-402e-4c65-a097-a836c5692479","Type":"ContainerStarted","Data":"10cf3bcccf096e172adc1a6b40146dbbce40f5280373a5f892affdab4a6468ca"} Mar 07 08:01:38 crc kubenswrapper[4761]: I0307 08:01:38.424300 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" event={"ID":"7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6","Type":"ContainerStarted","Data":"e5890693fb7fc2837518d7c5a3b4d289305240567ff375d18913638014f9b5bd"} Mar 07 08:01:38 crc kubenswrapper[4761]: I0307 08:01:38.426050 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" event={"ID":"40c12f82-6c14-4659-80c5-ab38e649706a","Type":"ContainerStarted","Data":"8af59dc2800a7dc6e3a4c30f557e6b9f1cbe1c7e29f1590225ebcd642e3b5fb6"} Mar 07 08:01:38 crc kubenswrapper[4761]: I0307 08:01:38.439118 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" podStartSLOduration=20.43910507 podStartE2EDuration="20.43910507s" podCreationTimestamp="2026-03-07 08:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:01:38.436604805 +0000 UTC m=+755.345771280" watchObservedRunningTime="2026-03-07 08:01:38.43910507 +0000 UTC m=+755.348271545" Mar 07 08:01:38 crc kubenswrapper[4761]: I0307 08:01:38.457278 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" podStartSLOduration=16.124637691 podStartE2EDuration="20.457255223s" podCreationTimestamp="2026-03-07 08:01:18 +0000 UTC" firstStartedPulling="2026-03-07 08:01:33.008805147 +0000 UTC m=+749.917971622" 
lastFinishedPulling="2026-03-07 08:01:37.341422679 +0000 UTC m=+754.250589154" observedRunningTime="2026-03-07 08:01:38.455016185 +0000 UTC m=+755.364182660" watchObservedRunningTime="2026-03-07 08:01:38.457255223 +0000 UTC m=+755.366421698" Mar 07 08:01:43 crc kubenswrapper[4761]: I0307 08:01:43.796781 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:44 crc kubenswrapper[4761]: I0307 08:01:44.473876 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" event={"ID":"40c12f82-6c14-4659-80c5-ab38e649706a","Type":"ContainerStarted","Data":"63afcdaaa1febe0d49c230b32d25385b49ca15f068434c0167c0d18a190283c4"} Mar 07 08:01:44 crc kubenswrapper[4761]: I0307 08:01:44.475474 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" event={"ID":"0c90c3e5-de84-4cb1-ac22-fe02ca708196","Type":"ContainerStarted","Data":"ac3db4c7ef28467b0d6f39a4bb606c00bdc0e747480347c450e85dda34f52b96"} Mar 07 08:01:44 crc kubenswrapper[4761]: I0307 08:01:44.475729 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:44 crc kubenswrapper[4761]: I0307 08:01:44.477570 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" event={"ID":"b17d76c5-b5d9-4f79-841e-287d05540b40","Type":"ContainerStarted","Data":"8950a99448316494a5c9b4ae0ee524caff25fe65053d41c211f9493c97df3975"} Mar 07 08:01:44 crc kubenswrapper[4761]: I0307 08:01:44.477925 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:44 crc kubenswrapper[4761]: I0307 08:01:44.494072 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:44 crc kubenswrapper[4761]: I0307 08:01:44.500346 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" podStartSLOduration=20.675434597 podStartE2EDuration="26.500323812s" podCreationTimestamp="2026-03-07 08:01:18 +0000 UTC" firstStartedPulling="2026-03-07 08:01:37.845560375 +0000 UTC m=+754.754726850" lastFinishedPulling="2026-03-07 08:01:43.67044957 +0000 UTC m=+760.579616065" observedRunningTime="2026-03-07 08:01:44.493189997 +0000 UTC m=+761.402356482" watchObservedRunningTime="2026-03-07 08:01:44.500323812 +0000 UTC m=+761.409490287" Mar 07 08:01:44 crc kubenswrapper[4761]: I0307 08:01:44.522450 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" podStartSLOduration=17.706903599 podStartE2EDuration="26.522420608s" podCreationTimestamp="2026-03-07 08:01:18 +0000 UTC" firstStartedPulling="2026-03-07 08:01:35.261267848 +0000 UTC m=+752.170434323" lastFinishedPulling="2026-03-07 08:01:44.076784857 +0000 UTC m=+760.985951332" observedRunningTime="2026-03-07 08:01:44.516207986 +0000 UTC m=+761.425374481" watchObservedRunningTime="2026-03-07 08:01:44.522420608 +0000 UTC m=+761.431587123" Mar 07 08:01:44 crc kubenswrapper[4761]: I0307 08:01:44.537168 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" podStartSLOduration=20.713520069 podStartE2EDuration="26.537152212s" podCreationTimestamp="2026-03-07 08:01:18 +0000 UTC" firstStartedPulling="2026-03-07 08:01:37.845502873 +0000 UTC m=+754.754669348" lastFinishedPulling="2026-03-07 08:01:43.669135016 +0000 UTC m=+760.578301491" observedRunningTime="2026-03-07 08:01:44.534203175 +0000 UTC m=+761.443369650" watchObservedRunningTime="2026-03-07 08:01:44.537152212 +0000 UTC m=+761.446318687" Mar 07 
08:01:49 crc kubenswrapper[4761]: I0307 08:01:49.192432 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.268470 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xg44s"] Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.272184 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg44s" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.273946 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.301923 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.302444 4761 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-spg69" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.307014 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-b26zv"] Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.308319 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-b26zv" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.309935 4761 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lh4bc" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.331910 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xg44s"] Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.339225 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-b26zv"] Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.347763 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-98h6c"] Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.349331 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.351045 4761 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-6rm4b" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.355737 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-98h6c"] Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.415419 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2jpd\" (UniqueName: \"kubernetes.io/projected/cd2551ef-1dad-4b6f-bbf0-8bb114a9ebe2-kube-api-access-j2jpd\") pod \"cert-manager-cainjector-cf98fcc89-xg44s\" (UID: \"cd2551ef-1dad-4b6f-bbf0-8bb114a9ebe2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg44s" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.415511 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzcgz\" (UniqueName: 
\"kubernetes.io/projected/abfb0a2a-4a92-4619-9335-3b8dcdda269d-kube-api-access-pzcgz\") pod \"cert-manager-858654f9db-b26zv\" (UID: \"abfb0a2a-4a92-4619-9335-3b8dcdda269d\") " pod="cert-manager/cert-manager-858654f9db-b26zv" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.415535 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lxgp\" (UniqueName: \"kubernetes.io/projected/563c8932-7287-4158-bb9a-7f464230ae9f-kube-api-access-2lxgp\") pod \"cert-manager-webhook-687f57d79b-98h6c\" (UID: \"563c8932-7287-4158-bb9a-7f464230ae9f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.516632 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2jpd\" (UniqueName: \"kubernetes.io/projected/cd2551ef-1dad-4b6f-bbf0-8bb114a9ebe2-kube-api-access-j2jpd\") pod \"cert-manager-cainjector-cf98fcc89-xg44s\" (UID: \"cd2551ef-1dad-4b6f-bbf0-8bb114a9ebe2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg44s" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.516755 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzcgz\" (UniqueName: \"kubernetes.io/projected/abfb0a2a-4a92-4619-9335-3b8dcdda269d-kube-api-access-pzcgz\") pod \"cert-manager-858654f9db-b26zv\" (UID: \"abfb0a2a-4a92-4619-9335-3b8dcdda269d\") " pod="cert-manager/cert-manager-858654f9db-b26zv" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.516781 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lxgp\" (UniqueName: \"kubernetes.io/projected/563c8932-7287-4158-bb9a-7f464230ae9f-kube-api-access-2lxgp\") pod \"cert-manager-webhook-687f57d79b-98h6c\" (UID: \"563c8932-7287-4158-bb9a-7f464230ae9f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.535063 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lxgp\" (UniqueName: \"kubernetes.io/projected/563c8932-7287-4158-bb9a-7f464230ae9f-kube-api-access-2lxgp\") pod \"cert-manager-webhook-687f57d79b-98h6c\" (UID: \"563c8932-7287-4158-bb9a-7f464230ae9f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.535921 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2jpd\" (UniqueName: \"kubernetes.io/projected/cd2551ef-1dad-4b6f-bbf0-8bb114a9ebe2-kube-api-access-j2jpd\") pod \"cert-manager-cainjector-cf98fcc89-xg44s\" (UID: \"cd2551ef-1dad-4b6f-bbf0-8bb114a9ebe2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg44s" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.541034 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzcgz\" (UniqueName: \"kubernetes.io/projected/abfb0a2a-4a92-4619-9335-3b8dcdda269d-kube-api-access-pzcgz\") pod \"cert-manager-858654f9db-b26zv\" (UID: \"abfb0a2a-4a92-4619-9335-3b8dcdda269d\") " pod="cert-manager/cert-manager-858654f9db-b26zv" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.602308 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg44s" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.635640 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-b26zv" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.664113 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" Mar 07 08:01:56 crc kubenswrapper[4761]: I0307 08:01:56.083236 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xg44s"] Mar 07 08:01:56 crc kubenswrapper[4761]: W0307 08:01:56.084085 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd2551ef_1dad_4b6f_bbf0_8bb114a9ebe2.slice/crio-105954f15c469a8306a8f1ba5eeec7952d5ef41fd637df46b2675f932599d5bb WatchSource:0}: Error finding container 105954f15c469a8306a8f1ba5eeec7952d5ef41fd637df46b2675f932599d5bb: Status 404 returned error can't find the container with id 105954f15c469a8306a8f1ba5eeec7952d5ef41fd637df46b2675f932599d5bb Mar 07 08:01:56 crc kubenswrapper[4761]: W0307 08:01:56.084927 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabfb0a2a_4a92_4619_9335_3b8dcdda269d.slice/crio-94ef76953da52cfc112f49075a64fb31eb89b8192dcce87568071257cbb975b3 WatchSource:0}: Error finding container 94ef76953da52cfc112f49075a64fb31eb89b8192dcce87568071257cbb975b3: Status 404 returned error can't find the container with id 94ef76953da52cfc112f49075a64fb31eb89b8192dcce87568071257cbb975b3 Mar 07 08:01:56 crc kubenswrapper[4761]: I0307 08:01:56.089375 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:01:56 crc kubenswrapper[4761]: I0307 08:01:56.089449 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-b26zv"] Mar 07 08:01:56 crc kubenswrapper[4761]: I0307 08:01:56.167401 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-98h6c"] Mar 07 08:01:56 crc kubenswrapper[4761]: W0307 08:01:56.171796 4761 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod563c8932_7287_4158_bb9a_7f464230ae9f.slice/crio-2dfc2a21365fe82155a41c3faf1110e1b3ba84a8f4b19ae94e656679212dd644 WatchSource:0}: Error finding container 2dfc2a21365fe82155a41c3faf1110e1b3ba84a8f4b19ae94e656679212dd644: Status 404 returned error can't find the container with id 2dfc2a21365fe82155a41c3faf1110e1b3ba84a8f4b19ae94e656679212dd644 Mar 07 08:01:56 crc kubenswrapper[4761]: I0307 08:01:56.555077 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg44s" event={"ID":"cd2551ef-1dad-4b6f-bbf0-8bb114a9ebe2","Type":"ContainerStarted","Data":"105954f15c469a8306a8f1ba5eeec7952d5ef41fd637df46b2675f932599d5bb"} Mar 07 08:01:56 crc kubenswrapper[4761]: I0307 08:01:56.556008 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" event={"ID":"563c8932-7287-4158-bb9a-7f464230ae9f","Type":"ContainerStarted","Data":"2dfc2a21365fe82155a41c3faf1110e1b3ba84a8f4b19ae94e656679212dd644"} Mar 07 08:01:56 crc kubenswrapper[4761]: I0307 08:01:56.557040 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-b26zv" event={"ID":"abfb0a2a-4a92-4619-9335-3b8dcdda269d","Type":"ContainerStarted","Data":"94ef76953da52cfc112f49075a64fb31eb89b8192dcce87568071257cbb975b3"} Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.126018 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547842-jnfnp"] Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.127214 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547842-jnfnp" Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.129742 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.129793 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.131071 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.147229 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547842-jnfnp"] Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.194314 4761 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.303328 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnr2p\" (UniqueName: \"kubernetes.io/projected/2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a-kube-api-access-xnr2p\") pod \"auto-csr-approver-29547842-jnfnp\" (UID: \"2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a\") " pod="openshift-infra/auto-csr-approver-29547842-jnfnp" Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.405226 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnr2p\" (UniqueName: \"kubernetes.io/projected/2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a-kube-api-access-xnr2p\") pod \"auto-csr-approver-29547842-jnfnp\" (UID: \"2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a\") " pod="openshift-infra/auto-csr-approver-29547842-jnfnp" Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.427336 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnr2p\" 
(UniqueName: \"kubernetes.io/projected/2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a-kube-api-access-xnr2p\") pod \"auto-csr-approver-29547842-jnfnp\" (UID: \"2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a\") " pod="openshift-infra/auto-csr-approver-29547842-jnfnp" Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.451337 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547842-jnfnp" Mar 07 08:02:03 crc kubenswrapper[4761]: I0307 08:02:03.608190 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg44s" event={"ID":"cd2551ef-1dad-4b6f-bbf0-8bb114a9ebe2","Type":"ContainerStarted","Data":"0a3275190da34a1048b9b5c5e43963359feb95c4e4d9895d8f922b48e3a790aa"} Mar 07 08:02:03 crc kubenswrapper[4761]: I0307 08:02:03.609614 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" event={"ID":"563c8932-7287-4158-bb9a-7f464230ae9f","Type":"ContainerStarted","Data":"991c9e35adef2aed22641e40d39d5f2968371d741c476dc3be08b60b6f7e6777"} Mar 07 08:02:03 crc kubenswrapper[4761]: I0307 08:02:03.609748 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" Mar 07 08:02:03 crc kubenswrapper[4761]: I0307 08:02:03.611503 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-b26zv" event={"ID":"abfb0a2a-4a92-4619-9335-3b8dcdda269d","Type":"ContainerStarted","Data":"006faa392c05097f848d6faa686eafc08e090481fb5ff591b711cc673d62f61b"} Mar 07 08:02:03 crc kubenswrapper[4761]: I0307 08:02:03.632605 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg44s" podStartSLOduration=1.386311757 podStartE2EDuration="8.632578237s" podCreationTimestamp="2026-03-07 08:01:55 +0000 UTC" firstStartedPulling="2026-03-07 08:01:56.088956009 +0000 UTC m=+772.998122514" 
lastFinishedPulling="2026-03-07 08:02:03.335222519 +0000 UTC m=+780.244388994" observedRunningTime="2026-03-07 08:02:03.62347041 +0000 UTC m=+780.532636895" watchObservedRunningTime="2026-03-07 08:02:03.632578237 +0000 UTC m=+780.541744742" Mar 07 08:02:03 crc kubenswrapper[4761]: I0307 08:02:03.648906 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-b26zv" podStartSLOduration=1.404253633 podStartE2EDuration="8.648866211s" podCreationTimestamp="2026-03-07 08:01:55 +0000 UTC" firstStartedPulling="2026-03-07 08:01:56.090485648 +0000 UTC m=+772.999652113" lastFinishedPulling="2026-03-07 08:02:03.335098216 +0000 UTC m=+780.244264691" observedRunningTime="2026-03-07 08:02:03.645289218 +0000 UTC m=+780.554455703" watchObservedRunningTime="2026-03-07 08:02:03.648866211 +0000 UTC m=+780.558032706" Mar 07 08:02:03 crc kubenswrapper[4761]: I0307 08:02:03.671567 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" podStartSLOduration=1.515759839 podStartE2EDuration="8.671543632s" podCreationTimestamp="2026-03-07 08:01:55 +0000 UTC" firstStartedPulling="2026-03-07 08:01:56.177436174 +0000 UTC m=+773.086602659" lastFinishedPulling="2026-03-07 08:02:03.333219967 +0000 UTC m=+780.242386452" observedRunningTime="2026-03-07 08:02:03.667330552 +0000 UTC m=+780.576497017" watchObservedRunningTime="2026-03-07 08:02:03.671543632 +0000 UTC m=+780.580710107" Mar 07 08:02:03 crc kubenswrapper[4761]: I0307 08:02:03.746522 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547842-jnfnp"] Mar 07 08:02:03 crc kubenswrapper[4761]: W0307 08:02:03.753556 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a6ff6ac_c09e_4e36_9b0f_3a090f30df9a.slice/crio-27dae075a44e6018a1574434ebcd11b0fa579667ea174f7ce6c40a11719a6de4 
WatchSource:0}: Error finding container 27dae075a44e6018a1574434ebcd11b0fa579667ea174f7ce6c40a11719a6de4: Status 404 returned error can't find the container with id 27dae075a44e6018a1574434ebcd11b0fa579667ea174f7ce6c40a11719a6de4 Mar 07 08:02:04 crc kubenswrapper[4761]: I0307 08:02:04.622244 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547842-jnfnp" event={"ID":"2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a","Type":"ContainerStarted","Data":"27dae075a44e6018a1574434ebcd11b0fa579667ea174f7ce6c40a11719a6de4"} Mar 07 08:02:05 crc kubenswrapper[4761]: I0307 08:02:05.635520 4761 generic.go:334] "Generic (PLEG): container finished" podID="2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a" containerID="09f4a34d389f4eecea1e2e246f771cea1437ac1408958e53146bc65495fe1ec0" exitCode=0 Mar 07 08:02:05 crc kubenswrapper[4761]: I0307 08:02:05.635622 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547842-jnfnp" event={"ID":"2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a","Type":"ContainerDied","Data":"09f4a34d389f4eecea1e2e246f771cea1437ac1408958e53146bc65495fe1ec0"} Mar 07 08:02:06 crc kubenswrapper[4761]: I0307 08:02:06.915655 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547842-jnfnp" Mar 07 08:02:07 crc kubenswrapper[4761]: I0307 08:02:07.117546 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnr2p\" (UniqueName: \"kubernetes.io/projected/2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a-kube-api-access-xnr2p\") pod \"2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a\" (UID: \"2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a\") " Mar 07 08:02:07 crc kubenswrapper[4761]: I0307 08:02:07.127593 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a-kube-api-access-xnr2p" (OuterVolumeSpecName: "kube-api-access-xnr2p") pod "2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a" (UID: "2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a"). InnerVolumeSpecName "kube-api-access-xnr2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:02:07 crc kubenswrapper[4761]: I0307 08:02:07.219213 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnr2p\" (UniqueName: \"kubernetes.io/projected/2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a-kube-api-access-xnr2p\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:07 crc kubenswrapper[4761]: I0307 08:02:07.660703 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547842-jnfnp" event={"ID":"2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a","Type":"ContainerDied","Data":"27dae075a44e6018a1574434ebcd11b0fa579667ea174f7ce6c40a11719a6de4"} Mar 07 08:02:07 crc kubenswrapper[4761]: I0307 08:02:07.660848 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27dae075a44e6018a1574434ebcd11b0fa579667ea174f7ce6c40a11719a6de4" Mar 07 08:02:07 crc kubenswrapper[4761]: I0307 08:02:07.660861 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547842-jnfnp" Mar 07 08:02:07 crc kubenswrapper[4761]: I0307 08:02:07.978804 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547836-m94k2"] Mar 07 08:02:07 crc kubenswrapper[4761]: I0307 08:02:07.983082 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547836-m94k2"] Mar 07 08:02:09 crc kubenswrapper[4761]: I0307 08:02:09.714581 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b65e7bf-925a-4cb6-b384-de21cbf6c795" path="/var/lib/kubelet/pods/4b65e7bf-925a-4cb6-b384-de21cbf6c795/volumes" Mar 07 08:02:10 crc kubenswrapper[4761]: I0307 08:02:10.666283 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" Mar 07 08:02:18 crc kubenswrapper[4761]: I0307 08:02:18.938205 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xnsvz"] Mar 07 08:02:18 crc kubenswrapper[4761]: E0307 08:02:18.939197 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a" containerName="oc" Mar 07 08:02:18 crc kubenswrapper[4761]: I0307 08:02:18.939218 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a" containerName="oc" Mar 07 08:02:18 crc kubenswrapper[4761]: I0307 08:02:18.939468 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a" containerName="oc" Mar 07 08:02:18 crc kubenswrapper[4761]: I0307 08:02:18.942647 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:18 crc kubenswrapper[4761]: I0307 08:02:18.957409 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xnsvz"] Mar 07 08:02:18 crc kubenswrapper[4761]: I0307 08:02:18.993399 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-catalog-content\") pod \"community-operators-xnsvz\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:18 crc kubenswrapper[4761]: I0307 08:02:18.993884 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-utilities\") pod \"community-operators-xnsvz\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:18 crc kubenswrapper[4761]: I0307 08:02:18.994076 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcl9w\" (UniqueName: \"kubernetes.io/projected/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-kube-api-access-rcl9w\") pod \"community-operators-xnsvz\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:19 crc kubenswrapper[4761]: I0307 08:02:19.095187 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcl9w\" (UniqueName: \"kubernetes.io/projected/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-kube-api-access-rcl9w\") pod \"community-operators-xnsvz\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:19 crc kubenswrapper[4761]: I0307 08:02:19.095268 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-catalog-content\") pod \"community-operators-xnsvz\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:19 crc kubenswrapper[4761]: I0307 08:02:19.095323 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-utilities\") pod \"community-operators-xnsvz\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:19 crc kubenswrapper[4761]: I0307 08:02:19.096083 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-catalog-content\") pod \"community-operators-xnsvz\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:19 crc kubenswrapper[4761]: I0307 08:02:19.096184 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-utilities\") pod \"community-operators-xnsvz\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:19 crc kubenswrapper[4761]: I0307 08:02:19.125074 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcl9w\" (UniqueName: \"kubernetes.io/projected/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-kube-api-access-rcl9w\") pod \"community-operators-xnsvz\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:19 crc kubenswrapper[4761]: I0307 08:02:19.271003 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:19 crc kubenswrapper[4761]: I0307 08:02:19.830292 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xnsvz"] Mar 07 08:02:19 crc kubenswrapper[4761]: W0307 08:02:19.834827 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaab4ecb_3f9c_4333_b21a_e46c75e7a8bb.slice/crio-560642b347b6305c79d67f964f0e29f361bf898a0184d474cc4994cfe1188790 WatchSource:0}: Error finding container 560642b347b6305c79d67f964f0e29f361bf898a0184d474cc4994cfe1188790: Status 404 returned error can't find the container with id 560642b347b6305c79d67f964f0e29f361bf898a0184d474cc4994cfe1188790 Mar 07 08:02:20 crc kubenswrapper[4761]: I0307 08:02:20.762788 4761 generic.go:334] "Generic (PLEG): container finished" podID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerID="3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd" exitCode=0 Mar 07 08:02:20 crc kubenswrapper[4761]: I0307 08:02:20.763034 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnsvz" event={"ID":"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb","Type":"ContainerDied","Data":"3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd"} Mar 07 08:02:20 crc kubenswrapper[4761]: I0307 08:02:20.763444 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnsvz" event={"ID":"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb","Type":"ContainerStarted","Data":"560642b347b6305c79d67f964f0e29f361bf898a0184d474cc4994cfe1188790"} Mar 07 08:02:21 crc kubenswrapper[4761]: I0307 08:02:21.775829 4761 generic.go:334] "Generic (PLEG): container finished" podID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerID="0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da" exitCode=0 Mar 07 08:02:21 crc kubenswrapper[4761]: I0307 
08:02:21.775897 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnsvz" event={"ID":"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb","Type":"ContainerDied","Data":"0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da"} Mar 07 08:02:22 crc kubenswrapper[4761]: I0307 08:02:22.789919 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnsvz" event={"ID":"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb","Type":"ContainerStarted","Data":"372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106"} Mar 07 08:02:22 crc kubenswrapper[4761]: I0307 08:02:22.809123 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xnsvz" podStartSLOduration=3.394830283 podStartE2EDuration="4.809098414s" podCreationTimestamp="2026-03-07 08:02:18 +0000 UTC" firstStartedPulling="2026-03-07 08:02:20.766044109 +0000 UTC m=+797.675210584" lastFinishedPulling="2026-03-07 08:02:22.18031223 +0000 UTC m=+799.089478715" observedRunningTime="2026-03-07 08:02:22.807339078 +0000 UTC m=+799.716505563" watchObservedRunningTime="2026-03-07 08:02:22.809098414 +0000 UTC m=+799.718264929" Mar 07 08:02:23 crc kubenswrapper[4761]: I0307 08:02:23.905987 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-629jr"] Mar 07 08:02:23 crc kubenswrapper[4761]: I0307 08:02:23.908317 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:23 crc kubenswrapper[4761]: I0307 08:02:23.938888 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-629jr"] Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.089338 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-catalog-content\") pod \"certified-operators-629jr\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.089394 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-utilities\") pod \"certified-operators-629jr\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.089451 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsrb8\" (UniqueName: \"kubernetes.io/projected/1699d240-57d9-4d38-8497-20564d06aa7c-kube-api-access-dsrb8\") pod \"certified-operators-629jr\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.191328 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsrb8\" (UniqueName: \"kubernetes.io/projected/1699d240-57d9-4d38-8497-20564d06aa7c-kube-api-access-dsrb8\") pod \"certified-operators-629jr\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.191417 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-catalog-content\") pod \"certified-operators-629jr\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.191448 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-utilities\") pod \"certified-operators-629jr\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.191898 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-utilities\") pod \"certified-operators-629jr\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.191911 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-catalog-content\") pod \"certified-operators-629jr\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.210794 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsrb8\" (UniqueName: \"kubernetes.io/projected/1699d240-57d9-4d38-8497-20564d06aa7c-kube-api-access-dsrb8\") pod \"certified-operators-629jr\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.228335 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.744024 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-629jr"] Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.805947 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-629jr" event={"ID":"1699d240-57d9-4d38-8497-20564d06aa7c","Type":"ContainerStarted","Data":"b7d12cd7491f7a48c505b6c4090d7445353ee720dc41aeb0c89655f5085125dd"} Mar 07 08:02:25 crc kubenswrapper[4761]: I0307 08:02:25.822140 4761 generic.go:334] "Generic (PLEG): container finished" podID="1699d240-57d9-4d38-8497-20564d06aa7c" containerID="11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9" exitCode=0 Mar 07 08:02:25 crc kubenswrapper[4761]: I0307 08:02:25.822304 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-629jr" event={"ID":"1699d240-57d9-4d38-8497-20564d06aa7c","Type":"ContainerDied","Data":"11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9"} Mar 07 08:02:26 crc kubenswrapper[4761]: I0307 08:02:26.835980 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-629jr" event={"ID":"1699d240-57d9-4d38-8497-20564d06aa7c","Type":"ContainerStarted","Data":"f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055"} Mar 07 08:02:27 crc kubenswrapper[4761]: I0307 08:02:27.854967 4761 generic.go:334] "Generic (PLEG): container finished" podID="1699d240-57d9-4d38-8497-20564d06aa7c" containerID="f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055" exitCode=0 Mar 07 08:02:27 crc kubenswrapper[4761]: I0307 08:02:27.855009 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-629jr" 
event={"ID":"1699d240-57d9-4d38-8497-20564d06aa7c","Type":"ContainerDied","Data":"f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055"} Mar 07 08:02:28 crc kubenswrapper[4761]: I0307 08:02:28.866811 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-629jr" event={"ID":"1699d240-57d9-4d38-8497-20564d06aa7c","Type":"ContainerStarted","Data":"57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7"} Mar 07 08:02:28 crc kubenswrapper[4761]: I0307 08:02:28.893684 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-629jr" podStartSLOduration=3.488807044 podStartE2EDuration="5.893648124s" podCreationTimestamp="2026-03-07 08:02:23 +0000 UTC" firstStartedPulling="2026-03-07 08:02:25.825403538 +0000 UTC m=+802.734570043" lastFinishedPulling="2026-03-07 08:02:28.230244608 +0000 UTC m=+805.139411123" observedRunningTime="2026-03-07 08:02:28.892024741 +0000 UTC m=+805.801191246" watchObservedRunningTime="2026-03-07 08:02:28.893648124 +0000 UTC m=+805.802814649" Mar 07 08:02:29 crc kubenswrapper[4761]: I0307 08:02:29.271433 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:29 crc kubenswrapper[4761]: I0307 08:02:29.271527 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:29 crc kubenswrapper[4761]: I0307 08:02:29.346087 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:29 crc kubenswrapper[4761]: I0307 08:02:29.956070 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:31 crc kubenswrapper[4761]: I0307 08:02:31.495234 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-xnsvz"]
Mar 07 08:02:31 crc kubenswrapper[4761]: I0307 08:02:31.889840 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xnsvz" podUID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerName="registry-server" containerID="cri-o://372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106" gracePeriod=2
Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.471651 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xnsvz"
Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.650903 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-catalog-content\") pod \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") "
Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.650985 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-utilities\") pod \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") "
Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.651186 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcl9w\" (UniqueName: \"kubernetes.io/projected/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-kube-api-access-rcl9w\") pod \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") "
Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.652614 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-utilities" (OuterVolumeSpecName: "utilities") pod "aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" (UID: "aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.659404 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-kube-api-access-rcl9w" (OuterVolumeSpecName: "kube-api-access-rcl9w") pod "aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" (UID: "aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb"). InnerVolumeSpecName "kube-api-access-rcl9w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.753127 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcl9w\" (UniqueName: \"kubernetes.io/projected/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-kube-api-access-rcl9w\") on node \"crc\" DevicePath \"\""
Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.753164 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.903802 4761 generic.go:334] "Generic (PLEG): container finished" podID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerID="372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106" exitCode=0
Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.903886 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnsvz" event={"ID":"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb","Type":"ContainerDied","Data":"372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106"}
Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.903929 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xnsvz"
Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.903978 4761 scope.go:117] "RemoveContainer" containerID="372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106"
Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.903955 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnsvz" event={"ID":"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb","Type":"ContainerDied","Data":"560642b347b6305c79d67f964f0e29f361bf898a0184d474cc4994cfe1188790"}
Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.938594 4761 scope.go:117] "RemoveContainer" containerID="0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da"
Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.965665 4761 scope.go:117] "RemoveContainer" containerID="3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd"
Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.003259 4761 scope.go:117] "RemoveContainer" containerID="372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106"
Mar 07 08:02:33 crc kubenswrapper[4761]: E0307 08:02:33.003896 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106\": container with ID starting with 372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106 not found: ID does not exist" containerID="372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106"
Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.004015 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106"} err="failed to get container status \"372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106\": rpc error: code = NotFound desc = could not find container \"372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106\": container with ID starting with 372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106 not found: ID does not exist"
Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.004063 4761 scope.go:117] "RemoveContainer" containerID="0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da"
Mar 07 08:02:33 crc kubenswrapper[4761]: E0307 08:02:33.004706 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da\": container with ID starting with 0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da not found: ID does not exist" containerID="0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da"
Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.004824 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da"} err="failed to get container status \"0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da\": rpc error: code = NotFound desc = could not find container \"0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da\": container with ID starting with 0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da not found: ID does not exist"
Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.004866 4761 scope.go:117] "RemoveContainer" containerID="3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd"
Mar 07 08:02:33 crc kubenswrapper[4761]: E0307 08:02:33.005644 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd\": container with ID starting with 3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd not found: ID does not exist" containerID="3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd"
Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.005691 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd"} err="failed to get container status \"3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd\": rpc error: code = NotFound desc = could not find container \"3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd\": container with ID starting with 3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd not found: ID does not exist"
Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.065519 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" (UID: "aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.162184 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.255890 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xnsvz"]
Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.265073 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xnsvz"]
Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.737440 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" path="/var/lib/kubelet/pods/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb/volumes"
Mar 07 08:02:34 crc kubenswrapper[4761]: I0307 08:02:34.228864 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-629jr"
Mar 07 08:02:34 crc kubenswrapper[4761]: I0307 08:02:34.228943 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-629jr"
Mar 07 08:02:34 crc kubenswrapper[4761]: I0307 08:02:34.311545 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-629jr"
Mar 07 08:02:34 crc kubenswrapper[4761]: I0307 08:02:34.987404 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-629jr"
Mar 07 08:02:35 crc kubenswrapper[4761]: I0307 08:02:35.904539 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-629jr"]
Mar 07 08:02:36 crc kubenswrapper[4761]: I0307 08:02:36.936838 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-629jr" podUID="1699d240-57d9-4d38-8497-20564d06aa7c" containerName="registry-server" containerID="cri-o://57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7" gracePeriod=2
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.362754 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-629jr"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.435448 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-utilities\") pod \"1699d240-57d9-4d38-8497-20564d06aa7c\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") "
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.435487 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsrb8\" (UniqueName: \"kubernetes.io/projected/1699d240-57d9-4d38-8497-20564d06aa7c-kube-api-access-dsrb8\") pod \"1699d240-57d9-4d38-8497-20564d06aa7c\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") "
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.435507 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-catalog-content\") pod \"1699d240-57d9-4d38-8497-20564d06aa7c\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") "
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.436315 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-utilities" (OuterVolumeSpecName: "utilities") pod "1699d240-57d9-4d38-8497-20564d06aa7c" (UID: "1699d240-57d9-4d38-8497-20564d06aa7c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.441134 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1699d240-57d9-4d38-8497-20564d06aa7c-kube-api-access-dsrb8" (OuterVolumeSpecName: "kube-api-access-dsrb8") pod "1699d240-57d9-4d38-8497-20564d06aa7c" (UID: "1699d240-57d9-4d38-8497-20564d06aa7c"). InnerVolumeSpecName "kube-api-access-dsrb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.502249 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1699d240-57d9-4d38-8497-20564d06aa7c" (UID: "1699d240-57d9-4d38-8497-20564d06aa7c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.537466 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.537505 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsrb8\" (UniqueName: \"kubernetes.io/projected/1699d240-57d9-4d38-8497-20564d06aa7c-kube-api-access-dsrb8\") on node \"crc\" DevicePath \"\""
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.537519 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.547672 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m"]
Mar 07 08:02:37 crc kubenswrapper[4761]: E0307 08:02:37.547997 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerName="registry-server"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.548016 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerName="registry-server"
Mar 07 08:02:37 crc kubenswrapper[4761]: E0307 08:02:37.548034 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1699d240-57d9-4d38-8497-20564d06aa7c" containerName="registry-server"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.548041 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1699d240-57d9-4d38-8497-20564d06aa7c" containerName="registry-server"
Mar 07 08:02:37 crc kubenswrapper[4761]: E0307 08:02:37.548049 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerName="extract-content"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.548055 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerName="extract-content"
Mar 07 08:02:37 crc kubenswrapper[4761]: E0307 08:02:37.548064 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1699d240-57d9-4d38-8497-20564d06aa7c" containerName="extract-content"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.548070 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1699d240-57d9-4d38-8497-20564d06aa7c" containerName="extract-content"
Mar 07 08:02:37 crc kubenswrapper[4761]: E0307 08:02:37.548084 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerName="extract-utilities"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.548089 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerName="extract-utilities"
Mar 07 08:02:37 crc kubenswrapper[4761]: E0307 08:02:37.548100 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1699d240-57d9-4d38-8497-20564d06aa7c" containerName="extract-utilities"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.548105 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1699d240-57d9-4d38-8497-20564d06aa7c" containerName="extract-utilities"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.548216 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerName="registry-server"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.548227 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1699d240-57d9-4d38-8497-20564d06aa7c" containerName="registry-server"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.549141 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.551163 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.572030 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m"]
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.638584 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.638627 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.638659 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zktw4\" (UniqueName: \"kubernetes.io/projected/279f54bc-0f03-43b4-9b53-1952777e9b85-kube-api-access-zktw4\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.739750 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.739802 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.739849 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zktw4\" (UniqueName: \"kubernetes.io/projected/279f54bc-0f03-43b4-9b53-1952777e9b85-kube-api-access-zktw4\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.740467 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.740607 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.753004 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr"]
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.762516 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.778451 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zktw4\" (UniqueName: \"kubernetes.io/projected/279f54bc-0f03-43b4-9b53-1952777e9b85-kube-api-access-zktw4\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.783694 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr"]
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.840882 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.840921 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.840979 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgxvq\" (UniqueName: \"kubernetes.io/projected/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-kube-api-access-lgxvq\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.861236 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.942259 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.942299 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.942371 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgxvq\" (UniqueName: \"kubernetes.io/projected/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-kube-api-access-lgxvq\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.943266 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.943512 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.959397 4761 generic.go:334] "Generic (PLEG): container finished" podID="1699d240-57d9-4d38-8497-20564d06aa7c" containerID="57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7" exitCode=0
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.959447 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-629jr" event={"ID":"1699d240-57d9-4d38-8497-20564d06aa7c","Type":"ContainerDied","Data":"57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7"}
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.959491 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-629jr" event={"ID":"1699d240-57d9-4d38-8497-20564d06aa7c","Type":"ContainerDied","Data":"b7d12cd7491f7a48c505b6c4090d7445353ee720dc41aeb0c89655f5085125dd"}
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.959512 4761 scope.go:117] "RemoveContainer" containerID="57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.959645 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-629jr"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.970069 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgxvq\" (UniqueName: \"kubernetes.io/projected/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-kube-api-access-lgxvq\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr"
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.989798 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-629jr"]
Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.991373 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-629jr"]
Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.003004 4761 scope.go:117] "RemoveContainer" containerID="f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055"
Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.029224 4761 scope.go:117] "RemoveContainer" containerID="11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9"
Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.081025 4761 scope.go:117] "RemoveContainer" containerID="57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7"
Mar 07 08:02:38 crc kubenswrapper[4761]: E0307 08:02:38.089612 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7\": container with ID starting with 57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7 not found: ID does not exist" containerID="57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7"
Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.089656 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7"} err="failed to get container status \"57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7\": rpc error: code = NotFound desc = could not find container \"57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7\": container with ID starting with 57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7 not found: ID does not exist"
Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.089683 4761 scope.go:117] "RemoveContainer" containerID="f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055"
Mar 07 08:02:38 crc kubenswrapper[4761]: E0307 08:02:38.095267 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055\": container with ID starting with f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055 not found: ID does not exist" containerID="f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055"
Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.095305 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055"} err="failed to get container status \"f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055\": rpc error: code = NotFound desc = could not find container \"f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055\": container with ID starting with f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055 not found: ID does not exist"
Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.095338 4761 scope.go:117] "RemoveContainer" containerID="11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9"
Mar 07 08:02:38 crc kubenswrapper[4761]: E0307 08:02:38.095697 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9\": container with ID starting with 11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9 not found: ID does not exist" containerID="11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9"
Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.095755 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9"} err="failed to get container status \"11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9\": rpc error: code = NotFound desc = could not find container \"11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9\": container with ID starting with 11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9 not found: ID does not exist"
Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.099071 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m"]
Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.103983 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr"
Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.499395 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr"]
Mar 07 08:02:38 crc kubenswrapper[4761]: W0307 08:02:38.509495 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb6c0fb0_7486_43c4_8f84_e495d653d6fe.slice/crio-625fb0afce982fd175f0be13afe9d6d27bbefcebbc1b7327a75771f921cf23d0 WatchSource:0}: Error finding container 625fb0afce982fd175f0be13afe9d6d27bbefcebbc1b7327a75771f921cf23d0: Status 404 returned error can't find the container with id 625fb0afce982fd175f0be13afe9d6d27bbefcebbc1b7327a75771f921cf23d0
Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.966859 4761 generic.go:334] "Generic (PLEG): container finished" podID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerID="52f95a8947c7b85cd9e52c4b0ae369fd19887f29c160d770023a95737c124e13" exitCode=0
Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.966960 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" event={"ID":"279f54bc-0f03-43b4-9b53-1952777e9b85","Type":"ContainerDied","Data":"52f95a8947c7b85cd9e52c4b0ae369fd19887f29c160d770023a95737c124e13"}
Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.968043 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" event={"ID":"279f54bc-0f03-43b4-9b53-1952777e9b85","Type":"ContainerStarted","Data":"98da8fa5f73f6ff701486d64315316ab354a462e510b81651c4cb35fb5a48854"}
Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.969701 4761 generic.go:334] "Generic (PLEG): container finished" podID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerID="0b432e1be3e5357800509911b7b452cc1aad202715c206b2e50aa5791f7ff2c5" exitCode=0
Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.969788 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" event={"ID":"eb6c0fb0-7486-43c4-8f84-e495d653d6fe","Type":"ContainerDied","Data":"0b432e1be3e5357800509911b7b452cc1aad202715c206b2e50aa5791f7ff2c5"}
Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.970066 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" event={"ID":"eb6c0fb0-7486-43c4-8f84-e495d653d6fe","Type":"ContainerStarted","Data":"625fb0afce982fd175f0be13afe9d6d27bbefcebbc1b7327a75771f921cf23d0"}
Mar 07 08:02:39 crc kubenswrapper[4761]: I0307 08:02:39.714826 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1699d240-57d9-4d38-8497-20564d06aa7c" path="/var/lib/kubelet/pods/1699d240-57d9-4d38-8497-20564d06aa7c/volumes"
Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.508779 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9p5w7"]
Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.512113 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9p5w7"
Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.527188 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9p5w7"]
Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.621280 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-catalog-content\") pod \"redhat-operators-9p5w7\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " pod="openshift-marketplace/redhat-operators-9p5w7"
Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.621511 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-utilities\") pod \"redhat-operators-9p5w7\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " pod="openshift-marketplace/redhat-operators-9p5w7"
Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.621562 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmg7d\" (UniqueName: \"kubernetes.io/projected/35b2e78a-c64c-43c9-a02c-5ae951212ba0-kube-api-access-tmg7d\") pod \"redhat-operators-9p5w7\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " pod="openshift-marketplace/redhat-operators-9p5w7"
Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.723077 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-catalog-content\") pod \"redhat-operators-9p5w7\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " pod="openshift-marketplace/redhat-operators-9p5w7"
Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.723195 4761 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"kube-api-access-tmg7d\" (UniqueName: \"kubernetes.io/projected/35b2e78a-c64c-43c9-a02c-5ae951212ba0-kube-api-access-tmg7d\") pod \"redhat-operators-9p5w7\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.723217 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-utilities\") pod \"redhat-operators-9p5w7\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.723682 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-utilities\") pod \"redhat-operators-9p5w7\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.723955 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-catalog-content\") pod \"redhat-operators-9p5w7\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.753590 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmg7d\" (UniqueName: \"kubernetes.io/projected/35b2e78a-c64c-43c9-a02c-5ae951212ba0-kube-api-access-tmg7d\") pod \"redhat-operators-9p5w7\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.838585 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:43 crc kubenswrapper[4761]: I0307 08:02:43.229751 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9p5w7"] Mar 07 08:02:43 crc kubenswrapper[4761]: W0307 08:02:43.239089 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35b2e78a_c64c_43c9_a02c_5ae951212ba0.slice/crio-1512bd7fbaba59b9fa70cb6db90ec099228548ee0d694c222ee4c1c3b48877ea WatchSource:0}: Error finding container 1512bd7fbaba59b9fa70cb6db90ec099228548ee0d694c222ee4c1c3b48877ea: Status 404 returned error can't find the container with id 1512bd7fbaba59b9fa70cb6db90ec099228548ee0d694c222ee4c1c3b48877ea Mar 07 08:02:43 crc kubenswrapper[4761]: I0307 08:02:43.676067 4761 scope.go:117] "RemoveContainer" containerID="3cb1b46082fb3b84b3f8cd834240e3995889dc15c1e02d44fee9edb19c7303c1" Mar 07 08:02:44 crc kubenswrapper[4761]: I0307 08:02:44.009184 4761 generic.go:334] "Generic (PLEG): container finished" podID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerID="855f6fb9f46a2441c157c2710c6087cbd19a1f2ea071573cff43fa958e8ea863" exitCode=0 Mar 07 08:02:44 crc kubenswrapper[4761]: I0307 08:02:44.009256 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" event={"ID":"279f54bc-0f03-43b4-9b53-1952777e9b85","Type":"ContainerDied","Data":"855f6fb9f46a2441c157c2710c6087cbd19a1f2ea071573cff43fa958e8ea863"} Mar 07 08:02:44 crc kubenswrapper[4761]: I0307 08:02:44.011048 4761 generic.go:334] "Generic (PLEG): container finished" podID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerID="b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753" exitCode=0 Mar 07 08:02:44 crc kubenswrapper[4761]: I0307 08:02:44.011102 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9p5w7" event={"ID":"35b2e78a-c64c-43c9-a02c-5ae951212ba0","Type":"ContainerDied","Data":"b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753"} Mar 07 08:02:44 crc kubenswrapper[4761]: I0307 08:02:44.011123 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9p5w7" event={"ID":"35b2e78a-c64c-43c9-a02c-5ae951212ba0","Type":"ContainerStarted","Data":"1512bd7fbaba59b9fa70cb6db90ec099228548ee0d694c222ee4c1c3b48877ea"} Mar 07 08:02:44 crc kubenswrapper[4761]: I0307 08:02:44.022967 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" event={"ID":"eb6c0fb0-7486-43c4-8f84-e495d653d6fe","Type":"ContainerDied","Data":"847f7b4f3b30aa78514809a2bf89007a47349995fc8dfa2d67e27dd93cad1d2a"} Mar 07 08:02:44 crc kubenswrapper[4761]: I0307 08:02:44.023848 4761 generic.go:334] "Generic (PLEG): container finished" podID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerID="847f7b4f3b30aa78514809a2bf89007a47349995fc8dfa2d67e27dd93cad1d2a" exitCode=0 Mar 07 08:02:45 crc kubenswrapper[4761]: I0307 08:02:45.041631 4761 generic.go:334] "Generic (PLEG): container finished" podID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerID="e80ea5b5c3532c691c6ab10e3b3f5e2adf5c660d7f8a5f1ed3a7ea7d190396f4" exitCode=0 Mar 07 08:02:45 crc kubenswrapper[4761]: I0307 08:02:45.041748 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" event={"ID":"279f54bc-0f03-43b4-9b53-1952777e9b85","Type":"ContainerDied","Data":"e80ea5b5c3532c691c6ab10e3b3f5e2adf5c660d7f8a5f1ed3a7ea7d190396f4"} Mar 07 08:02:45 crc kubenswrapper[4761]: I0307 08:02:45.046508 4761 generic.go:334] "Generic (PLEG): container finished" podID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerID="e478733196971e5a41147c9931671fcd26e78c73dc348af57f8acac722c65525" 
exitCode=0 Mar 07 08:02:45 crc kubenswrapper[4761]: I0307 08:02:45.046567 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" event={"ID":"eb6c0fb0-7486-43c4-8f84-e495d653d6fe","Type":"ContainerDied","Data":"e478733196971e5a41147c9931671fcd26e78c73dc348af57f8acac722c65525"} Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.055389 4761 generic.go:334] "Generic (PLEG): container finished" podID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerID="f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0" exitCode=0 Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.056610 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9p5w7" event={"ID":"35b2e78a-c64c-43c9-a02c-5ae951212ba0","Type":"ContainerDied","Data":"f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0"} Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.360556 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.364836 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.383672 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgxvq\" (UniqueName: \"kubernetes.io/projected/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-kube-api-access-lgxvq\") pod \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.383807 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-bundle\") pod \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.383839 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-util\") pod \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.385521 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-bundle" (OuterVolumeSpecName: "bundle") pod "eb6c0fb0-7486-43c4-8f84-e495d653d6fe" (UID: "eb6c0fb0-7486-43c4-8f84-e495d653d6fe"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.390597 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-kube-api-access-lgxvq" (OuterVolumeSpecName: "kube-api-access-lgxvq") pod "eb6c0fb0-7486-43c4-8f84-e495d653d6fe" (UID: "eb6c0fb0-7486-43c4-8f84-e495d653d6fe"). InnerVolumeSpecName "kube-api-access-lgxvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.405840 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-util" (OuterVolumeSpecName: "util") pod "eb6c0fb0-7486-43c4-8f84-e495d653d6fe" (UID: "eb6c0fb0-7486-43c4-8f84-e495d653d6fe"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.484581 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-util\") pod \"279f54bc-0f03-43b4-9b53-1952777e9b85\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.484711 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-bundle\") pod \"279f54bc-0f03-43b4-9b53-1952777e9b85\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.484866 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zktw4\" (UniqueName: \"kubernetes.io/projected/279f54bc-0f03-43b4-9b53-1952777e9b85-kube-api-access-zktw4\") pod \"279f54bc-0f03-43b4-9b53-1952777e9b85\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.485125 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgxvq\" (UniqueName: \"kubernetes.io/projected/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-kube-api-access-lgxvq\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.485139 4761 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.485149 4761 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-util\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.485709 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-bundle" (OuterVolumeSpecName: "bundle") pod "279f54bc-0f03-43b4-9b53-1952777e9b85" (UID: "279f54bc-0f03-43b4-9b53-1952777e9b85"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.489050 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/279f54bc-0f03-43b4-9b53-1952777e9b85-kube-api-access-zktw4" (OuterVolumeSpecName: "kube-api-access-zktw4") pod "279f54bc-0f03-43b4-9b53-1952777e9b85" (UID: "279f54bc-0f03-43b4-9b53-1952777e9b85"). InnerVolumeSpecName "kube-api-access-zktw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.499157 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-util" (OuterVolumeSpecName: "util") pod "279f54bc-0f03-43b4-9b53-1952777e9b85" (UID: "279f54bc-0f03-43b4-9b53-1952777e9b85"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.586691 4761 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.586736 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zktw4\" (UniqueName: \"kubernetes.io/projected/279f54bc-0f03-43b4-9b53-1952777e9b85-kube-api-access-zktw4\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.586749 4761 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-util\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:47 crc kubenswrapper[4761]: I0307 08:02:47.064384 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9p5w7" event={"ID":"35b2e78a-c64c-43c9-a02c-5ae951212ba0","Type":"ContainerStarted","Data":"df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4"} Mar 07 08:02:47 crc kubenswrapper[4761]: I0307 08:02:47.067842 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" event={"ID":"eb6c0fb0-7486-43c4-8f84-e495d653d6fe","Type":"ContainerDied","Data":"625fb0afce982fd175f0be13afe9d6d27bbefcebbc1b7327a75771f921cf23d0"} Mar 07 08:02:47 crc kubenswrapper[4761]: I0307 08:02:47.067873 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="625fb0afce982fd175f0be13afe9d6d27bbefcebbc1b7327a75771f921cf23d0" Mar 07 08:02:47 crc kubenswrapper[4761]: I0307 08:02:47.067928 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" Mar 07 08:02:47 crc kubenswrapper[4761]: I0307 08:02:47.070280 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" event={"ID":"279f54bc-0f03-43b4-9b53-1952777e9b85","Type":"ContainerDied","Data":"98da8fa5f73f6ff701486d64315316ab354a462e510b81651c4cb35fb5a48854"} Mar 07 08:02:47 crc kubenswrapper[4761]: I0307 08:02:47.070322 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98da8fa5f73f6ff701486d64315316ab354a462e510b81651c4cb35fb5a48854" Mar 07 08:02:47 crc kubenswrapper[4761]: I0307 08:02:47.070393 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" Mar 07 08:02:47 crc kubenswrapper[4761]: I0307 08:02:47.085449 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9p5w7" podStartSLOduration=2.275411233 podStartE2EDuration="5.085429301s" podCreationTimestamp="2026-03-07 08:02:42 +0000 UTC" firstStartedPulling="2026-03-07 08:02:44.01432474 +0000 UTC m=+820.923491215" lastFinishedPulling="2026-03-07 08:02:46.824342778 +0000 UTC m=+823.733509283" observedRunningTime="2026-03-07 08:02:47.081833857 +0000 UTC m=+823.991000352" watchObservedRunningTime="2026-03-07 08:02:47.085429301 +0000 UTC m=+823.994595776" Mar 07 08:02:52 crc kubenswrapper[4761]: I0307 08:02:52.838907 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:52 crc kubenswrapper[4761]: I0307 08:02:52.839494 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:53 crc kubenswrapper[4761]: I0307 08:02:53.880031 4761 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9p5w7" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerName="registry-server" probeResult="failure" output=< Mar 07 08:02:53 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:02:53 crc kubenswrapper[4761]: > Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.653334 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq"] Mar 07 08:02:55 crc kubenswrapper[4761]: E0307 08:02:55.653630 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerName="util" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.653644 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerName="util" Mar 07 08:02:55 crc kubenswrapper[4761]: E0307 08:02:55.653661 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerName="pull" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.653669 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerName="pull" Mar 07 08:02:55 crc kubenswrapper[4761]: E0307 08:02:55.653682 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerName="util" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.653690 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerName="util" Mar 07 08:02:55 crc kubenswrapper[4761]: E0307 08:02:55.653702 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerName="extract" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.653710 4761 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerName="extract" Mar 07 08:02:55 crc kubenswrapper[4761]: E0307 08:02:55.653740 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerName="pull" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.653747 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerName="pull" Mar 07 08:02:55 crc kubenswrapper[4761]: E0307 08:02:55.653763 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerName="extract" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.653770 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerName="extract" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.653896 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerName="extract" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.653923 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerName="extract" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.654684 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.658528 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-b8sr2" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.658640 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.658697 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.658904 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.662540 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.663169 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.698515 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq"] Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.723303 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8a7603da-0d59-431b-82c9-59c887e9f8d6-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 
08:02:55.723751 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/8a7603da-0d59-431b-82c9-59c887e9f8d6-manager-config\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.723782 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a7603da-0d59-431b-82c9-59c887e9f8d6-apiservice-cert\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.723832 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a7603da-0d59-431b-82c9-59c887e9f8d6-webhook-cert\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.723867 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpdmm\" (UniqueName: \"kubernetes.io/projected/8a7603da-0d59-431b-82c9-59c887e9f8d6-kube-api-access-rpdmm\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.825677 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: 
\"kubernetes.io/configmap/8a7603da-0d59-431b-82c9-59c887e9f8d6-manager-config\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.825745 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a7603da-0d59-431b-82c9-59c887e9f8d6-apiservice-cert\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.825835 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a7603da-0d59-431b-82c9-59c887e9f8d6-webhook-cert\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.825882 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpdmm\" (UniqueName: \"kubernetes.io/projected/8a7603da-0d59-431b-82c9-59c887e9f8d6-kube-api-access-rpdmm\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.825934 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8a7603da-0d59-431b-82c9-59c887e9f8d6-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.826616 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/8a7603da-0d59-431b-82c9-59c887e9f8d6-manager-config\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.832519 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a7603da-0d59-431b-82c9-59c887e9f8d6-webhook-cert\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.833407 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8a7603da-0d59-431b-82c9-59c887e9f8d6-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.834267 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a7603da-0d59-431b-82c9-59c887e9f8d6-apiservice-cert\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.844427 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpdmm\" 
(UniqueName: \"kubernetes.io/projected/8a7603da-0d59-431b-82c9-59c887e9f8d6-kube-api-access-rpdmm\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.970852 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:56 crc kubenswrapper[4761]: I0307 08:02:56.221065 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq"] Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.125880 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-jzcxv"] Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.126858 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-jzcxv" Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.136542 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-bmx25" Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.136580 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.137137 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.152497 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-jzcxv"] Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.170417 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" event={"ID":"8a7603da-0d59-431b-82c9-59c887e9f8d6","Type":"ContainerStarted","Data":"f8e3beff569050c0c57aadd22918cd5f320a720b65279d6a5eb0137d2dc6e7be"} Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.247315 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbk6p\" (UniqueName: \"kubernetes.io/projected/e53253dc-17a2-4470-a579-410f349a1759-kube-api-access-fbk6p\") pod \"cluster-logging-operator-c769fd969-jzcxv\" (UID: \"e53253dc-17a2-4470-a579-410f349a1759\") " pod="openshift-logging/cluster-logging-operator-c769fd969-jzcxv" Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.349425 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbk6p\" (UniqueName: \"kubernetes.io/projected/e53253dc-17a2-4470-a579-410f349a1759-kube-api-access-fbk6p\") pod \"cluster-logging-operator-c769fd969-jzcxv\" (UID: \"e53253dc-17a2-4470-a579-410f349a1759\") " 
pod="openshift-logging/cluster-logging-operator-c769fd969-jzcxv" Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.392536 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbk6p\" (UniqueName: \"kubernetes.io/projected/e53253dc-17a2-4470-a579-410f349a1759-kube-api-access-fbk6p\") pod \"cluster-logging-operator-c769fd969-jzcxv\" (UID: \"e53253dc-17a2-4470-a579-410f349a1759\") " pod="openshift-logging/cluster-logging-operator-c769fd969-jzcxv" Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.452053 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-jzcxv" Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.725359 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-jzcxv"] Mar 07 08:02:57 crc kubenswrapper[4761]: W0307 08:02:57.744611 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode53253dc_17a2_4470_a579_410f349a1759.slice/crio-8247dddf3fda50605f14c5d4a254d43b4c9966586ab47e97161391a604f39a43 WatchSource:0}: Error finding container 8247dddf3fda50605f14c5d4a254d43b4c9966586ab47e97161391a604f39a43: Status 404 returned error can't find the container with id 8247dddf3fda50605f14c5d4a254d43b4c9966586ab47e97161391a604f39a43 Mar 07 08:02:58 crc kubenswrapper[4761]: I0307 08:02:58.178740 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-jzcxv" event={"ID":"e53253dc-17a2-4470-a579-410f349a1759","Type":"ContainerStarted","Data":"8247dddf3fda50605f14c5d4a254d43b4c9966586ab47e97161391a604f39a43"} Mar 07 08:03:02 crc kubenswrapper[4761]: I0307 08:03:02.213112 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" 
event={"ID":"8a7603da-0d59-431b-82c9-59c887e9f8d6","Type":"ContainerStarted","Data":"3517be816e0f2c5d9edb7477be01e1bc04ecb675d7079511a75d3e4d093fa6bd"} Mar 07 08:03:02 crc kubenswrapper[4761]: I0307 08:03:02.885299 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:03:02 crc kubenswrapper[4761]: I0307 08:03:02.937022 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:03:03 crc kubenswrapper[4761]: I0307 08:03:03.497692 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9p5w7"] Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.233567 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9p5w7" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerName="registry-server" containerID="cri-o://df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4" gracePeriod=2 Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.234226 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-jzcxv" event={"ID":"e53253dc-17a2-4470-a579-410f349a1759","Type":"ContainerStarted","Data":"caaf5b62f39c007ac01c7c1bc6eeb31c1164c0994dd7319f45ead7e976a4488c"} Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.269936 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-jzcxv" podStartSLOduration=1.281107237 podStartE2EDuration="7.269907941s" podCreationTimestamp="2026-03-07 08:02:57 +0000 UTC" firstStartedPulling="2026-03-07 08:02:57.749438064 +0000 UTC m=+834.658604529" lastFinishedPulling="2026-03-07 08:03:03.738238758 +0000 UTC m=+840.647405233" observedRunningTime="2026-03-07 08:03:04.259816328 +0000 UTC m=+841.168982813" watchObservedRunningTime="2026-03-07 
08:03:04.269907941 +0000 UTC m=+841.179074416" Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.652258 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.800282 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmg7d\" (UniqueName: \"kubernetes.io/projected/35b2e78a-c64c-43c9-a02c-5ae951212ba0-kube-api-access-tmg7d\") pod \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.800379 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-catalog-content\") pod \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.800442 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-utilities\") pod \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.801556 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-utilities" (OuterVolumeSpecName: "utilities") pod "35b2e78a-c64c-43c9-a02c-5ae951212ba0" (UID: "35b2e78a-c64c-43c9-a02c-5ae951212ba0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.811196 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35b2e78a-c64c-43c9-a02c-5ae951212ba0-kube-api-access-tmg7d" (OuterVolumeSpecName: "kube-api-access-tmg7d") pod "35b2e78a-c64c-43c9-a02c-5ae951212ba0" (UID: "35b2e78a-c64c-43c9-a02c-5ae951212ba0"). InnerVolumeSpecName "kube-api-access-tmg7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.905702 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.905758 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmg7d\" (UniqueName: \"kubernetes.io/projected/35b2e78a-c64c-43c9-a02c-5ae951212ba0-kube-api-access-tmg7d\") on node \"crc\" DevicePath \"\"" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.004702 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35b2e78a-c64c-43c9-a02c-5ae951212ba0" (UID: "35b2e78a-c64c-43c9-a02c-5ae951212ba0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.008226 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.244294 4761 generic.go:334] "Generic (PLEG): container finished" podID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerID="df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4" exitCode=0 Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.244330 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9p5w7" event={"ID":"35b2e78a-c64c-43c9-a02c-5ae951212ba0","Type":"ContainerDied","Data":"df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4"} Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.244680 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9p5w7" event={"ID":"35b2e78a-c64c-43c9-a02c-5ae951212ba0","Type":"ContainerDied","Data":"1512bd7fbaba59b9fa70cb6db90ec099228548ee0d694c222ee4c1c3b48877ea"} Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.244430 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.244810 4761 scope.go:117] "RemoveContainer" containerID="df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.274830 4761 scope.go:117] "RemoveContainer" containerID="f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.289564 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9p5w7"] Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.295695 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9p5w7"] Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.321149 4761 scope.go:117] "RemoveContainer" containerID="b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.344866 4761 scope.go:117] "RemoveContainer" containerID="df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4" Mar 07 08:03:05 crc kubenswrapper[4761]: E0307 08:03:05.345413 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4\": container with ID starting with df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4 not found: ID does not exist" containerID="df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.345528 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4"} err="failed to get container status \"df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4\": rpc error: code = NotFound desc = could not find container 
\"df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4\": container with ID starting with df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4 not found: ID does not exist" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.345608 4761 scope.go:117] "RemoveContainer" containerID="f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0" Mar 07 08:03:05 crc kubenswrapper[4761]: E0307 08:03:05.347078 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0\": container with ID starting with f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0 not found: ID does not exist" containerID="f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.347104 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0"} err="failed to get container status \"f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0\": rpc error: code = NotFound desc = could not find container \"f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0\": container with ID starting with f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0 not found: ID does not exist" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.347118 4761 scope.go:117] "RemoveContainer" containerID="b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753" Mar 07 08:03:05 crc kubenswrapper[4761]: E0307 08:03:05.347465 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753\": container with ID starting with b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753 not found: ID does not exist" 
containerID="b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.347502 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753"} err="failed to get container status \"b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753\": rpc error: code = NotFound desc = could not find container \"b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753\": container with ID starting with b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753 not found: ID does not exist" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.715340 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" path="/var/lib/kubelet/pods/35b2e78a-c64c-43c9-a02c-5ae951212ba0/volumes" Mar 07 08:03:10 crc kubenswrapper[4761]: I0307 08:03:10.281692 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" event={"ID":"8a7603da-0d59-431b-82c9-59c887e9f8d6","Type":"ContainerStarted","Data":"670cf4705d06c271d0a6e6b99dcaf059835e2fd1583c042880cf3b402103e3b0"} Mar 07 08:03:10 crc kubenswrapper[4761]: I0307 08:03:10.282323 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:03:10 crc kubenswrapper[4761]: I0307 08:03:10.285292 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:03:10 crc kubenswrapper[4761]: I0307 08:03:10.319029 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" podStartSLOduration=1.9981505149999998 podStartE2EDuration="15.319004923s" 
podCreationTimestamp="2026-03-07 08:02:55 +0000 UTC" firstStartedPulling="2026-03-07 08:02:56.229598302 +0000 UTC m=+833.138764777" lastFinishedPulling="2026-03-07 08:03:09.55045271 +0000 UTC m=+846.459619185" observedRunningTime="2026-03-07 08:03:10.313444519 +0000 UTC m=+847.222610994" watchObservedRunningTime="2026-03-07 08:03:10.319004923 +0000 UTC m=+847.228171398" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.912772 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Mar 07 08:03:14 crc kubenswrapper[4761]: E0307 08:03:14.913613 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerName="extract-content" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.913625 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerName="extract-content" Mar 07 08:03:14 crc kubenswrapper[4761]: E0307 08:03:14.913650 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerName="registry-server" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.913656 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerName="registry-server" Mar 07 08:03:14 crc kubenswrapper[4761]: E0307 08:03:14.913665 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerName="extract-utilities" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.913671 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerName="extract-utilities" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.913811 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerName="registry-server" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.914252 4761 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.916247 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.916529 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.918107 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.965286 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c48a71d9-f686-40a3-9f89-7c4b0bf5f96c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c48a71d9-f686-40a3-9f89-7c4b0bf5f96c\") pod \"minio\" (UID: \"83abedf4-14d9-46f8-aacc-1fcd4dcca872\") " pod="minio-dev/minio" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.965347 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr9sr\" (UniqueName: \"kubernetes.io/projected/83abedf4-14d9-46f8-aacc-1fcd4dcca872-kube-api-access-wr9sr\") pod \"minio\" (UID: \"83abedf4-14d9-46f8-aacc-1fcd4dcca872\") " pod="minio-dev/minio" Mar 07 08:03:15 crc kubenswrapper[4761]: I0307 08:03:15.066550 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c48a71d9-f686-40a3-9f89-7c4b0bf5f96c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c48a71d9-f686-40a3-9f89-7c4b0bf5f96c\") pod \"minio\" (UID: \"83abedf4-14d9-46f8-aacc-1fcd4dcca872\") " pod="minio-dev/minio" Mar 07 08:03:15 crc kubenswrapper[4761]: I0307 08:03:15.066624 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr9sr\" (UniqueName: \"kubernetes.io/projected/83abedf4-14d9-46f8-aacc-1fcd4dcca872-kube-api-access-wr9sr\") pod 
\"minio\" (UID: \"83abedf4-14d9-46f8-aacc-1fcd4dcca872\") " pod="minio-dev/minio" Mar 07 08:03:15 crc kubenswrapper[4761]: I0307 08:03:15.070741 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:03:15 crc kubenswrapper[4761]: I0307 08:03:15.070792 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c48a71d9-f686-40a3-9f89-7c4b0bf5f96c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c48a71d9-f686-40a3-9f89-7c4b0bf5f96c\") pod \"minio\" (UID: \"83abedf4-14d9-46f8-aacc-1fcd4dcca872\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a2a228eb7a28b13b849f9ef0de0436cfd5b85a805c16fd0b59975b33f082feb9/globalmount\"" pod="minio-dev/minio" Mar 07 08:03:15 crc kubenswrapper[4761]: I0307 08:03:15.094590 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr9sr\" (UniqueName: \"kubernetes.io/projected/83abedf4-14d9-46f8-aacc-1fcd4dcca872-kube-api-access-wr9sr\") pod \"minio\" (UID: \"83abedf4-14d9-46f8-aacc-1fcd4dcca872\") " pod="minio-dev/minio" Mar 07 08:03:15 crc kubenswrapper[4761]: I0307 08:03:15.107560 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c48a71d9-f686-40a3-9f89-7c4b0bf5f96c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c48a71d9-f686-40a3-9f89-7c4b0bf5f96c\") pod \"minio\" (UID: \"83abedf4-14d9-46f8-aacc-1fcd4dcca872\") " pod="minio-dev/minio" Mar 07 08:03:15 crc kubenswrapper[4761]: I0307 08:03:15.238229 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Mar 07 08:03:15 crc kubenswrapper[4761]: I0307 08:03:15.662938 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Mar 07 08:03:16 crc kubenswrapper[4761]: I0307 08:03:16.333187 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"83abedf4-14d9-46f8-aacc-1fcd4dcca872","Type":"ContainerStarted","Data":"7254093c1adecee34cfc079a2934db74ceffe7842eac2f05701d39554b8f8f89"} Mar 07 08:03:19 crc kubenswrapper[4761]: I0307 08:03:19.352985 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"83abedf4-14d9-46f8-aacc-1fcd4dcca872","Type":"ContainerStarted","Data":"276cce773f44c2316b7dc4e8214c7ae1e45b573996d0df90e136841c866ec1fb"} Mar 07 08:03:19 crc kubenswrapper[4761]: I0307 08:03:19.372299 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.25560914 podStartE2EDuration="7.372283379s" podCreationTimestamp="2026-03-07 08:03:12 +0000 UTC" firstStartedPulling="2026-03-07 08:03:15.673056826 +0000 UTC m=+852.582223301" lastFinishedPulling="2026-03-07 08:03:18.789731065 +0000 UTC m=+855.698897540" observedRunningTime="2026-03-07 08:03:19.37027536 +0000 UTC m=+856.279441835" watchObservedRunningTime="2026-03-07 08:03:19.372283379 +0000 UTC m=+856.281449854" Mar 07 08:03:24 crc kubenswrapper[4761]: I0307 08:03:24.987728 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh"] Mar 07 08:03:24 crc kubenswrapper[4761]: I0307 08:03:24.989257 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:24 crc kubenswrapper[4761]: I0307 08:03:24.991940 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Mar 07 08:03:24 crc kubenswrapper[4761]: I0307 08:03:24.992081 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Mar 07 08:03:24 crc kubenswrapper[4761]: I0307 08:03:24.992315 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Mar 07 08:03:24 crc kubenswrapper[4761]: I0307 08:03:24.992469 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Mar 07 08:03:24 crc kubenswrapper[4761]: I0307 08:03:24.992615 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-bx7rj" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.003041 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.113926 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.114003 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-config\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: 
\"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.114063 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.114093 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.114339 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs7xm\" (UniqueName: \"kubernetes.io/projected/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-kube-api-access-zs7xm\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.124056 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.124890 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.126327 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.127016 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.127158 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.144351 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.205203 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.206009 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.214936 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.215135 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.216087 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.216144 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbbwk\" (UniqueName: \"kubernetes.io/projected/c0d9aa49-bf5e-4663-9523-a67b07e95721-kube-api-access-zbbwk\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.216180 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.216242 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" 
(UniqueName: \"kubernetes.io/secret/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.216566 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.216912 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs7xm\" (UniqueName: \"kubernetes.io/projected/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-kube-api-access-zs7xm\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.216981 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0d9aa49-bf5e-4663-9523-a67b07e95721-config\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.217036 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.217066 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: 
\"kubernetes.io/secret/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.217101 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.217132 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.217162 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-config\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.217813 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc 
kubenswrapper[4761]: I0307 08:03:25.218254 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-config\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.222938 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.225571 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.258596 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs7xm\" (UniqueName: \"kubernetes.io/projected/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-kube-api-access-zs7xm\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.304627 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-6549c956bc-hqsjt"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.305645 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.306368 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.313462 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.313637 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.313745 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.314005 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.314309 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.318165 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.318205 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0d9aa49-bf5e-4663-9523-a67b07e95721-config\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " 
pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.318232 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.318254 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.318896 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.318918 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.318939 4761 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.319042 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47npc\" (UniqueName: \"kubernetes.io/projected/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-kube-api-access-47npc\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.319088 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.319111 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-config\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.319147 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbbwk\" (UniqueName: \"kubernetes.io/projected/c0d9aa49-bf5e-4663-9523-a67b07e95721-kube-api-access-zbbwk\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: 
\"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.320302 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0d9aa49-bf5e-4663-9523-a67b07e95721-config\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.322476 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.323884 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-6549c956bc-b2qfh"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.324972 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.327958 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 
08:03:25.330840 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.352965 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-dbc5h" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.362602 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6549c956bc-hqsjt"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.363733 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbbwk\" (UniqueName: \"kubernetes.io/projected/c0d9aa49-bf5e-4663-9523-a67b07e95721-kube-api-access-zbbwk\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.365037 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.385832 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6549c956bc-b2qfh"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420214 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-rbac\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 
08:03:25.420271 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-rbac\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420304 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420338 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-config\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420385 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szmj4\" (UniqueName: \"kubernetes.io/projected/efc019b2-ac66-44ef-a1e7-cce4db209456-kube-api-access-szmj4\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420419 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-logging-loki-gateway-ca-bundle\") 
pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420445 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-lokistack-gateway\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420510 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420532 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-lokistack-gateway\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420561 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: 
I0307 08:03:25.420589 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420826 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420867 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-tenants\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420895 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420926 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: 
\"kubernetes.io/secret/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420950 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-tls-secret\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420982 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-tls-secret\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.421008 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.421037 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pzhb\" (UniqueName: \"kubernetes.io/projected/b942b317-2819-4d06-9e2a-ed257dd6e63e-kube-api-access-4pzhb\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " 
pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.421114 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-tenants\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.421142 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47npc\" (UniqueName: \"kubernetes.io/projected/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-kube-api-access-47npc\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.421439 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-config\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.422405 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.424770 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: 
\"kubernetes.io/secret/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.426366 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.437245 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47npc\" (UniqueName: \"kubernetes.io/projected/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-kube-api-access-47npc\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.449031 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.522932 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.522986 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-tenants\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523011 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523035 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-tls-secret\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523067 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: 
\"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-tls-secret\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523094 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pzhb\" (UniqueName: \"kubernetes.io/projected/b942b317-2819-4d06-9e2a-ed257dd6e63e-kube-api-access-4pzhb\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523125 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-tenants\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523150 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-rbac\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523169 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-rbac\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523202 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-szmj4\" (UniqueName: \"kubernetes.io/projected/efc019b2-ac66-44ef-a1e7-cce4db209456-kube-api-access-szmj4\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523226 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523246 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-lokistack-gateway\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: E0307 08:03:25.523248 4761 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Mar 07 08:03:25 crc kubenswrapper[4761]: E0307 08:03:25.523345 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-tls-secret podName:b942b317-2819-4d06-9e2a-ed257dd6e63e nodeName:}" failed. No retries permitted until 2026-03-07 08:03:26.02332094 +0000 UTC m=+862.932487525 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-tls-secret") pod "logging-loki-gateway-6549c956bc-b2qfh" (UID: "b942b317-2819-4d06-9e2a-ed257dd6e63e") : secret "logging-loki-gateway-http" not found Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523270 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523861 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-lokistack-gateway\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523896 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523929 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " 
pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: E0307 08:03:25.523248 4761 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.524286 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: E0307 08:03:25.524336 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-tls-secret podName:efc019b2-ac66-44ef-a1e7-cce4db209456 nodeName:}" failed. No retries permitted until 2026-03-07 08:03:26.024314004 +0000 UTC m=+862.933480479 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-tls-secret") pod "logging-loki-gateway-6549c956bc-hqsjt" (UID: "efc019b2-ac66-44ef-a1e7-cce4db209456") : secret "logging-loki-gateway-http" not found Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.524706 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-rbac\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.524754 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.524957 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.525088 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc 
kubenswrapper[4761]: I0307 08:03:25.525647 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-rbac\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.525730 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-lokistack-gateway\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.527278 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-lokistack-gateway\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.527428 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.528052 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: 
\"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.532847 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-tenants\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.539488 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-tenants\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.544173 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szmj4\" (UniqueName: \"kubernetes.io/projected/efc019b2-ac66-44ef-a1e7-cce4db209456-kube-api-access-szmj4\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.545469 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pzhb\" (UniqueName: \"kubernetes.io/projected/b942b317-2819-4d06-9e2a-ed257dd6e63e-kube-api-access-4pzhb\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.587330 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.630003 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.751742 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv"] Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.033467 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-tls-secret\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.033538 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-tls-secret\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.039218 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-tls-secret\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.039779 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-tls-secret\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " 
pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.109050 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88"] Mar 07 08:03:26 crc kubenswrapper[4761]: W0307 08:03:26.115749 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22aee2b0_8c5f_486a_b74f_51b6452c7f8c.slice/crio-04e783086c4639b3f0276759787a98de3389cc23af93d0dde89050c0fb8775b3 WatchSource:0}: Error finding container 04e783086c4639b3f0276759787a98de3389cc23af93d0dde89050c0fb8775b3: Status 404 returned error can't find the container with id 04e783086c4639b3f0276759787a98de3389cc23af93d0dde89050c0fb8775b3 Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.126792 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.127565 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.135540 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.135693 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.149148 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.183017 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.183899 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.186017 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.186301 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.195968 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.238213 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzgxw\" (UniqueName: \"kubernetes.io/projected/133e9b5e-adcc-4dd6-b762-fc29c779b70a-kube-api-access-mzgxw\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.238329 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b2a1f7a8-919d-46ca-81a9-7cda3229ecd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b2a1f7a8-919d-46ca-81a9-7cda3229ecd9\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.238581 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.238637 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/133e9b5e-adcc-4dd6-b762-fc29c779b70a-config\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.238678 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0814dfa9-2608-4e80-802b-015b02428474\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0814dfa9-2608-4e80-802b-015b02428474\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.238798 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.238821 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.238869 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " 
pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.256537 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.260129 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.263801 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.264022 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.271898 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.291628 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.298442 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.346843 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzgxw\" (UniqueName: \"kubernetes.io/projected/133e9b5e-adcc-4dd6-b762-fc29c779b70a-kube-api-access-mzgxw\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.346977 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347059 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d390fba-d423-4b88-90b2-0b291fe8e35b-config\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347088 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zz4l\" (UniqueName: \"kubernetes.io/projected/2d390fba-d423-4b88-90b2-0b291fe8e35b-kube-api-access-7zz4l\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347210 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ed3dc6dd-e534-41c2-b652-4aa0714797a0-config\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347323 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347372 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347402 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347446 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e17e7fa-9075-40b7-a2a8-485c3f5b2af4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e17e7fa-9075-40b7-a2a8-485c3f5b2af4\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347507 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347567 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347653 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b2a1f7a8-919d-46ca-81a9-7cda3229ecd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b2a1f7a8-919d-46ca-81a9-7cda3229ecd9\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347683 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347735 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " 
pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347778 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/133e9b5e-adcc-4dd6-b762-fc29c779b70a-config\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347813 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0814dfa9-2608-4e80-802b-015b02428474\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0814dfa9-2608-4e80-802b-015b02428474\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347861 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347887 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347913 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbg8w\" (UniqueName: \"kubernetes.io/projected/ed3dc6dd-e534-41c2-b652-4aa0714797a0-kube-api-access-zbg8w\") pod \"logging-loki-compactor-0\" (UID: 
\"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347949 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347991 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7496967a-d369-4972-b2b8-3b981e5febc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7496967a-d369-4972-b2b8-3b981e5febc9\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.348241 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.352911 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.353192 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/133e9b5e-adcc-4dd6-b762-fc29c779b70a-config\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.354623 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.354790 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.354825 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.356282 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.356291 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.356318 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0814dfa9-2608-4e80-802b-015b02428474\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0814dfa9-2608-4e80-802b-015b02428474\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b078229dc72709655dd8e44244faa66741189c378fbc698da1a7cdf2f65bce10/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.356338 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b2a1f7a8-919d-46ca-81a9-7cda3229ecd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b2a1f7a8-919d-46ca-81a9-7cda3229ecd9\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4e91185a0356fac865b6535f03688eb3e9b1661eaf84bf080593a239376f4a2f/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.360305 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzgxw\" (UniqueName: \"kubernetes.io/projected/133e9b5e-adcc-4dd6-b762-fc29c779b70a-kube-api-access-mzgxw\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.426073 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b2a1f7a8-919d-46ca-81a9-7cda3229ecd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b2a1f7a8-919d-46ca-81a9-7cda3229ecd9\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " 
pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.430528 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" event={"ID":"c0d9aa49-bf5e-4663-9523-a67b07e95721","Type":"ContainerStarted","Data":"b8f3861d4de6e02e717b655e4f64a4bded8822559b4434c46bfb54e20615ae0b"} Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.430777 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0814dfa9-2608-4e80-802b-015b02428474\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0814dfa9-2608-4e80-802b-015b02428474\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.432681 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" event={"ID":"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745","Type":"ContainerStarted","Data":"338fcb99d6d5b2abf4f3629f6ccc046d2fbec2a9d5b34e2356eab4f209e2ff97"} Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.433688 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" event={"ID":"22aee2b0-8c5f-486a-b74f-51b6452c7f8c","Type":"ContainerStarted","Data":"04e783086c4639b3f0276759787a98de3389cc23af93d0dde89050c0fb8775b3"} Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.450663 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.451846 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbg8w\" (UniqueName: \"kubernetes.io/projected/ed3dc6dd-e534-41c2-b652-4aa0714797a0-kube-api-access-zbg8w\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.451934 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.451966 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7496967a-d369-4972-b2b8-3b981e5febc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7496967a-d369-4972-b2b8-3b981e5febc9\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.451997 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452030 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: 
\"kubernetes.io/secret/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452061 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d390fba-d423-4b88-90b2-0b291fe8e35b-config\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452082 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zz4l\" (UniqueName: \"kubernetes.io/projected/2d390fba-d423-4b88-90b2-0b291fe8e35b-kube-api-access-7zz4l\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452124 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed3dc6dd-e534-41c2-b652-4aa0714797a0-config\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452168 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452195 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e17e7fa-9075-40b7-a2a8-485c3f5b2af4\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e17e7fa-9075-40b7-a2a8-485c3f5b2af4\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452224 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452252 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452287 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452355 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.457168 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.459530 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.459840 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.460440 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed3dc6dd-e534-41c2-b652-4aa0714797a0-config\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.461085 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.462089 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2d390fba-d423-4b88-90b2-0b291fe8e35b-config\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.466478 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.467060 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.467503 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.468616 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.468646 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7496967a-d369-4972-b2b8-3b981e5febc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7496967a-d369-4972-b2b8-3b981e5febc9\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0dcdaaf79a2fdab350adb901e66b985d5a795ddb264ca798d1507ef596ae6e08/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.468676 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.472497 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbg8w\" (UniqueName: \"kubernetes.io/projected/ed3dc6dd-e534-41c2-b652-4aa0714797a0-kube-api-access-zbg8w\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.473472 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.473505 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e17e7fa-9075-40b7-a2a8-485c3f5b2af4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e17e7fa-9075-40b7-a2a8-485c3f5b2af4\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/76c05f0131de725cd99f1fa483caafcf1ca18591625b99b1b596ccea4ba1d24a/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.476497 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zz4l\" (UniqueName: \"kubernetes.io/projected/2d390fba-d423-4b88-90b2-0b291fe8e35b-kube-api-access-7zz4l\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.498909 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e17e7fa-9075-40b7-a2a8-485c3f5b2af4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e17e7fa-9075-40b7-a2a8-485c3f5b2af4\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.500340 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7496967a-d369-4972-b2b8-3b981e5febc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7496967a-d369-4972-b2b8-3b981e5febc9\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.507531 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.522915 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6549c956bc-b2qfh"] Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.588447 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: W0307 08:03:26.593796 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb942b317_2819_4d06_9e2a_ed257dd6e63e.slice/crio-e8196f9937a66d105d00786382e823219f235a62db35e68b391edb4be6b91619 WatchSource:0}: Error finding container e8196f9937a66d105d00786382e823219f235a62db35e68b391edb4be6b91619: Status 404 returned error can't find the container with id e8196f9937a66d105d00786382e823219f235a62db35e68b391edb4be6b91619 Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.636826 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6549c956bc-hqsjt"] Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.817362 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 07 08:03:26 crc kubenswrapper[4761]: W0307 08:03:26.831212 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded3dc6dd_e534_41c2_b652_4aa0714797a0.slice/crio-d311d9613d5eb317016e664c80380c57897fb6bc65926e3b0d74f878bb5c245e WatchSource:0}: Error finding container d311d9613d5eb317016e664c80380c57897fb6bc65926e3b0d74f878bb5c245e: Status 404 returned error can't find the container with id d311d9613d5eb317016e664c80380c57897fb6bc65926e3b0d74f878bb5c245e Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.860656 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 07 08:03:26 crc kubenswrapper[4761]: W0307 08:03:26.867071 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d390fba_d423_4b88_90b2_0b291fe8e35b.slice/crio-0c07668da8a636bb439558941ee45c0454ba7d36d7684e45cadecea8cab62031 WatchSource:0}: Error finding container 0c07668da8a636bb439558941ee45c0454ba7d36d7684e45cadecea8cab62031: Status 404 returned error can't find the container with id 0c07668da8a636bb439558941ee45c0454ba7d36d7684e45cadecea8cab62031 Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.958609 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 07 08:03:26 crc kubenswrapper[4761]: W0307 08:03:26.962619 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod133e9b5e_adcc_4dd6_b762_fc29c779b70a.slice/crio-6b693e4ea2e8bea50bf38c8097be3b565305f80d743ee0a7a5f5d2cae1e38538 WatchSource:0}: Error finding container 6b693e4ea2e8bea50bf38c8097be3b565305f80d743ee0a7a5f5d2cae1e38538: Status 404 returned error can't find the container with id 6b693e4ea2e8bea50bf38c8097be3b565305f80d743ee0a7a5f5d2cae1e38538 Mar 07 08:03:27 crc kubenswrapper[4761]: I0307 08:03:27.443308 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"ed3dc6dd-e534-41c2-b652-4aa0714797a0","Type":"ContainerStarted","Data":"d311d9613d5eb317016e664c80380c57897fb6bc65926e3b0d74f878bb5c245e"} Mar 07 08:03:27 crc kubenswrapper[4761]: I0307 08:03:27.446969 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"133e9b5e-adcc-4dd6-b762-fc29c779b70a","Type":"ContainerStarted","Data":"6b693e4ea2e8bea50bf38c8097be3b565305f80d743ee0a7a5f5d2cae1e38538"} Mar 07 08:03:27 crc kubenswrapper[4761]: I0307 
08:03:27.448454 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" event={"ID":"efc019b2-ac66-44ef-a1e7-cce4db209456","Type":"ContainerStarted","Data":"a6ec32d3d0e8855f9d7e2d53329694364c0b3e1bbe7fc8462acdefd039db27f5"} Mar 07 08:03:27 crc kubenswrapper[4761]: I0307 08:03:27.467616 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" event={"ID":"b942b317-2819-4d06-9e2a-ed257dd6e63e","Type":"ContainerStarted","Data":"e8196f9937a66d105d00786382e823219f235a62db35e68b391edb4be6b91619"} Mar 07 08:03:27 crc kubenswrapper[4761]: I0307 08:03:27.470159 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"2d390fba-d423-4b88-90b2-0b291fe8e35b","Type":"ContainerStarted","Data":"0c07668da8a636bb439558941ee45c0454ba7d36d7684e45cadecea8cab62031"} Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.499353 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"133e9b5e-adcc-4dd6-b762-fc29c779b70a","Type":"ContainerStarted","Data":"80db57990d6232e4c7a8c00914e2f4a9d6363647795e33f6d1ef43650cf7d54d"} Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.499908 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.500948 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" event={"ID":"efc019b2-ac66-44ef-a1e7-cce4db209456","Type":"ContainerStarted","Data":"63e96a890d5f20cbf6e41105ceb9d4c9a8e9ce86f9362ed6c1d8e19b72e75ede"} Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.502251 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" 
event={"ID":"b942b317-2819-4d06-9e2a-ed257dd6e63e","Type":"ContainerStarted","Data":"02a779367a36fb8390abe437f41b88d030c432640ac4323ad693f6793bb56209"} Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.503979 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" event={"ID":"c0d9aa49-bf5e-4663-9523-a67b07e95721","Type":"ContainerStarted","Data":"8c0d27b679f3b180244232c0b2c0e0bd4ebb0e771119f2327b28a573a8cf2c9c"} Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.504184 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.506575 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" event={"ID":"22aee2b0-8c5f-486a-b74f-51b6452c7f8c","Type":"ContainerStarted","Data":"a5e15fef3385b33810272d003abc6a28c1387351b036ddb206ebc40bbecfb497"} Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.507143 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.509128 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"2d390fba-d423-4b88-90b2-0b291fe8e35b","Type":"ContainerStarted","Data":"08699fdaeead8b45ab6e2c6128c26caad2118cf0bb2fab1e9be13e4bcaf3aace"} Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.509674 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.527534 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" 
event={"ID":"ed3dc6dd-e534-41c2-b652-4aa0714797a0","Type":"ContainerStarted","Data":"4534d803332429727c7c547534b2c580196e10eed6db66321027d6b1c7e39d29"} Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.528430 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.530342 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" event={"ID":"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745","Type":"ContainerStarted","Data":"3c09c25c35cf7bba5b3fa0ea4f6f376fd9225af51beccc1be420fdfe42919d4a"} Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.530870 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.545122 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.106134612 podStartE2EDuration="5.545104091s" podCreationTimestamp="2026-03-07 08:03:25 +0000 UTC" firstStartedPulling="2026-03-07 08:03:26.964397339 +0000 UTC m=+863.873563814" lastFinishedPulling="2026-03-07 08:03:29.403366818 +0000 UTC m=+866.312533293" observedRunningTime="2026-03-07 08:03:30.537939448 +0000 UTC m=+867.447105933" watchObservedRunningTime="2026-03-07 08:03:30.545104091 +0000 UTC m=+867.454270566" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.557625 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" podStartSLOduration=2.300636706 podStartE2EDuration="5.557611653s" podCreationTimestamp="2026-03-07 08:03:25 +0000 UTC" firstStartedPulling="2026-03-07 08:03:26.11815715 +0000 UTC m=+863.027323625" lastFinishedPulling="2026-03-07 08:03:29.375132057 +0000 UTC m=+866.284298572" 
observedRunningTime="2026-03-07 08:03:30.556511667 +0000 UTC m=+867.465678142" watchObservedRunningTime="2026-03-07 08:03:30.557611653 +0000 UTC m=+867.466778128" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.581564 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.042435854 podStartE2EDuration="5.581541801s" podCreationTimestamp="2026-03-07 08:03:25 +0000 UTC" firstStartedPulling="2026-03-07 08:03:26.833496799 +0000 UTC m=+863.742663274" lastFinishedPulling="2026-03-07 08:03:29.372602736 +0000 UTC m=+866.281769221" observedRunningTime="2026-03-07 08:03:30.575336141 +0000 UTC m=+867.484502636" watchObservedRunningTime="2026-03-07 08:03:30.581541801 +0000 UTC m=+867.490708296" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.606277 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" podStartSLOduration=2.889001119 podStartE2EDuration="6.606250737s" podCreationTimestamp="2026-03-07 08:03:24 +0000 UTC" firstStartedPulling="2026-03-07 08:03:25.640481388 +0000 UTC m=+862.549647853" lastFinishedPulling="2026-03-07 08:03:29.357730986 +0000 UTC m=+866.266897471" observedRunningTime="2026-03-07 08:03:30.599790831 +0000 UTC m=+867.508957326" watchObservedRunningTime="2026-03-07 08:03:30.606250737 +0000 UTC m=+867.515417232" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.628070 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.168702163 podStartE2EDuration="5.628047854s" podCreationTimestamp="2026-03-07 08:03:25 +0000 UTC" firstStartedPulling="2026-03-07 08:03:26.869807316 +0000 UTC m=+863.778973791" lastFinishedPulling="2026-03-07 08:03:29.329153007 +0000 UTC m=+866.238319482" observedRunningTime="2026-03-07 08:03:30.616073114 +0000 UTC m=+867.525239629" 
watchObservedRunningTime="2026-03-07 08:03:30.628047854 +0000 UTC m=+867.537214339" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.640537 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" podStartSLOduration=2.062461036 podStartE2EDuration="5.640514194s" podCreationTimestamp="2026-03-07 08:03:25 +0000 UTC" firstStartedPulling="2026-03-07 08:03:25.782021755 +0000 UTC m=+862.691188240" lastFinishedPulling="2026-03-07 08:03:29.360074903 +0000 UTC m=+866.269241398" observedRunningTime="2026-03-07 08:03:30.636953039 +0000 UTC m=+867.546119514" watchObservedRunningTime="2026-03-07 08:03:30.640514194 +0000 UTC m=+867.549680669" Mar 07 08:03:35 crc kubenswrapper[4761]: I0307 08:03:35.580198 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" event={"ID":"efc019b2-ac66-44ef-a1e7-cce4db209456","Type":"ContainerStarted","Data":"147ca13d46937c9ce38261c5a527ad548f1fa8f6148a34758821fdee7d5e24e5"} Mar 07 08:03:35 crc kubenswrapper[4761]: I0307 08:03:35.580982 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:35 crc kubenswrapper[4761]: I0307 08:03:35.586919 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" event={"ID":"b942b317-2819-4d06-9e2a-ed257dd6e63e","Type":"ContainerStarted","Data":"8e72bba4c7e4ae9e54ae02b0d96ff72f5f0ddb8017973189bd232bfa28b4fa6c"} Mar 07 08:03:35 crc kubenswrapper[4761]: I0307 08:03:35.587682 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:35 crc kubenswrapper[4761]: I0307 08:03:35.588102 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:35 crc 
kubenswrapper[4761]: I0307 08:03:35.598916 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:35 crc kubenswrapper[4761]: I0307 08:03:35.604176 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:35 crc kubenswrapper[4761]: I0307 08:03:35.609351 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:35 crc kubenswrapper[4761]: I0307 08:03:35.628397 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podStartSLOduration=1.987698302 podStartE2EDuration="10.628366365s" podCreationTimestamp="2026-03-07 08:03:25 +0000 UTC" firstStartedPulling="2026-03-07 08:03:26.670772991 +0000 UTC m=+863.579939466" lastFinishedPulling="2026-03-07 08:03:35.311441014 +0000 UTC m=+872.220607529" observedRunningTime="2026-03-07 08:03:35.612240426 +0000 UTC m=+872.521406961" watchObservedRunningTime="2026-03-07 08:03:35.628366365 +0000 UTC m=+872.537532870" Mar 07 08:03:36 crc kubenswrapper[4761]: I0307 08:03:36.293382 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:36 crc kubenswrapper[4761]: I0307 08:03:36.305365 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:36 crc kubenswrapper[4761]: I0307 08:03:36.334048 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podStartSLOduration=2.637202072 podStartE2EDuration="11.334024021s" podCreationTimestamp="2026-03-07 08:03:25 +0000 UTC" firstStartedPulling="2026-03-07 08:03:26.608348394 +0000 UTC 
m=+863.517514869" lastFinishedPulling="2026-03-07 08:03:35.305170343 +0000 UTC m=+872.214336818" observedRunningTime="2026-03-07 08:03:35.737438658 +0000 UTC m=+872.646605133" watchObservedRunningTime="2026-03-07 08:03:36.334024021 +0000 UTC m=+873.243190506" Mar 07 08:03:43 crc kubenswrapper[4761]: I0307 08:03:43.768615 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:03:43 crc kubenswrapper[4761]: I0307 08:03:43.769826 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:03:45 crc kubenswrapper[4761]: I0307 08:03:45.318833 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:45 crc kubenswrapper[4761]: I0307 08:03:45.456919 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:45 crc kubenswrapper[4761]: I0307 08:03:45.595917 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:46 crc kubenswrapper[4761]: I0307 08:03:46.457992 4761 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Mar 07 08:03:46 crc kubenswrapper[4761]: I0307 
08:03:46.458382 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="133e9b5e-adcc-4dd6-b762-fc29c779b70a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 07 08:03:46 crc kubenswrapper[4761]: I0307 08:03:46.514593 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:46 crc kubenswrapper[4761]: I0307 08:03:46.593130 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:56 crc kubenswrapper[4761]: I0307 08:03:56.457867 4761 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Mar 07 08:03:56 crc kubenswrapper[4761]: I0307 08:03:56.458908 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="133e9b5e-adcc-4dd6-b762-fc29c779b70a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.149980 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547844-4dg2j"] Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.151533 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547844-4dg2j" Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.154894 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.155289 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.155401 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.169626 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547844-4dg2j"] Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.239658 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvnsv\" (UniqueName: \"kubernetes.io/projected/a2ec016f-1c81-4af0-8f87-99481163f94c-kube-api-access-dvnsv\") pod \"auto-csr-approver-29547844-4dg2j\" (UID: \"a2ec016f-1c81-4af0-8f87-99481163f94c\") " pod="openshift-infra/auto-csr-approver-29547844-4dg2j" Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.341839 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvnsv\" (UniqueName: \"kubernetes.io/projected/a2ec016f-1c81-4af0-8f87-99481163f94c-kube-api-access-dvnsv\") pod \"auto-csr-approver-29547844-4dg2j\" (UID: \"a2ec016f-1c81-4af0-8f87-99481163f94c\") " pod="openshift-infra/auto-csr-approver-29547844-4dg2j" Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.373078 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvnsv\" (UniqueName: \"kubernetes.io/projected/a2ec016f-1c81-4af0-8f87-99481163f94c-kube-api-access-dvnsv\") pod \"auto-csr-approver-29547844-4dg2j\" (UID: \"a2ec016f-1c81-4af0-8f87-99481163f94c\") " 
pod="openshift-infra/auto-csr-approver-29547844-4dg2j"
Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.489223 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547844-4dg2j"
Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.905808 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547844-4dg2j"]
Mar 07 08:04:01 crc kubenswrapper[4761]: I0307 08:04:01.796296 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547844-4dg2j" event={"ID":"a2ec016f-1c81-4af0-8f87-99481163f94c","Type":"ContainerStarted","Data":"587028cc076be5208aee86f59353ca92c0e912ed272d5d739c5185e5d18e632e"}
Mar 07 08:04:02 crc kubenswrapper[4761]: I0307 08:04:02.817690 4761 generic.go:334] "Generic (PLEG): container finished" podID="a2ec016f-1c81-4af0-8f87-99481163f94c" containerID="25b083a88820e0eed141fdd41201c4a079e70c052e7ce84fcef6154729306ab1" exitCode=0
Mar 07 08:04:02 crc kubenswrapper[4761]: I0307 08:04:02.817977 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547844-4dg2j" event={"ID":"a2ec016f-1c81-4af0-8f87-99481163f94c","Type":"ContainerDied","Data":"25b083a88820e0eed141fdd41201c4a079e70c052e7ce84fcef6154729306ab1"}
Mar 07 08:04:04 crc kubenswrapper[4761]: I0307 08:04:04.223301 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547844-4dg2j"
Mar 07 08:04:04 crc kubenswrapper[4761]: I0307 08:04:04.318320 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvnsv\" (UniqueName: \"kubernetes.io/projected/a2ec016f-1c81-4af0-8f87-99481163f94c-kube-api-access-dvnsv\") pod \"a2ec016f-1c81-4af0-8f87-99481163f94c\" (UID: \"a2ec016f-1c81-4af0-8f87-99481163f94c\") "
Mar 07 08:04:04 crc kubenswrapper[4761]: I0307 08:04:04.323461 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2ec016f-1c81-4af0-8f87-99481163f94c-kube-api-access-dvnsv" (OuterVolumeSpecName: "kube-api-access-dvnsv") pod "a2ec016f-1c81-4af0-8f87-99481163f94c" (UID: "a2ec016f-1c81-4af0-8f87-99481163f94c"). InnerVolumeSpecName "kube-api-access-dvnsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:04:04 crc kubenswrapper[4761]: I0307 08:04:04.420385 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvnsv\" (UniqueName: \"kubernetes.io/projected/a2ec016f-1c81-4af0-8f87-99481163f94c-kube-api-access-dvnsv\") on node \"crc\" DevicePath \"\""
Mar 07 08:04:04 crc kubenswrapper[4761]: I0307 08:04:04.838858 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547844-4dg2j" event={"ID":"a2ec016f-1c81-4af0-8f87-99481163f94c","Type":"ContainerDied","Data":"587028cc076be5208aee86f59353ca92c0e912ed272d5d739c5185e5d18e632e"}
Mar 07 08:04:04 crc kubenswrapper[4761]: I0307 08:04:04.838918 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="587028cc076be5208aee86f59353ca92c0e912ed272d5d739c5185e5d18e632e"
Mar 07 08:04:04 crc kubenswrapper[4761]: I0307 08:04:04.838947 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547844-4dg2j"
Mar 07 08:04:05 crc kubenswrapper[4761]: I0307 08:04:05.136156 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547838-mpzrk"]
Mar 07 08:04:05 crc kubenswrapper[4761]: I0307 08:04:05.141271 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547838-mpzrk"]
Mar 07 08:04:05 crc kubenswrapper[4761]: I0307 08:04:05.716923 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="874f3622-b314-4b99-b663-e7b63dad53f6" path="/var/lib/kubelet/pods/874f3622-b314-4b99-b663-e7b63dad53f6/volumes"
Mar 07 08:04:06 crc kubenswrapper[4761]: I0307 08:04:06.458658 4761 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Mar 07 08:04:06 crc kubenswrapper[4761]: I0307 08:04:06.458761 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="133e9b5e-adcc-4dd6-b762-fc29c779b70a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Mar 07 08:04:13 crc kubenswrapper[4761]: I0307 08:04:13.769220 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 08:04:13 crc kubenswrapper[4761]: I0307 08:04:13.770554 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 08:04:16 crc kubenswrapper[4761]: I0307 08:04:16.456702 4761 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Mar 07 08:04:16 crc kubenswrapper[4761]: I0307 08:04:16.457539 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="133e9b5e-adcc-4dd6-b762-fc29c779b70a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Mar 07 08:04:26 crc kubenswrapper[4761]: I0307 08:04:26.457696 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0"
Mar 07 08:04:43 crc kubenswrapper[4761]: I0307 08:04:43.768834 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 08:04:43 crc kubenswrapper[4761]: I0307 08:04:43.769396 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 08:04:43 crc kubenswrapper[4761]: I0307 08:04:43.769444 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9"
Mar 07 08:04:43 crc kubenswrapper[4761]: I0307 08:04:43.770186 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1d761b7f5e7692b9893671098d197b8b035ee46f61a8e0511bcc06bc73f8c8f"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 07 08:04:43 crc kubenswrapper[4761]: I0307 08:04:43.770246 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://c1d761b7f5e7692b9893671098d197b8b035ee46f61a8e0511bcc06bc73f8c8f" gracePeriod=600
Mar 07 08:04:43 crc kubenswrapper[4761]: I0307 08:04:43.802161 4761 scope.go:117] "RemoveContainer" containerID="7a2a5869acc50549f2b35140d3c5e4a51520531a922a419e98ea8062338830e2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.191947 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="c1d761b7f5e7692b9893671098d197b8b035ee46f61a8e0511bcc06bc73f8c8f" exitCode=0
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.192039 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"c1d761b7f5e7692b9893671098d197b8b035ee46f61a8e0511bcc06bc73f8c8f"}
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.192480 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"aca69e929765f604d6be340ee9bf2395b19b14b626bf0c5263eb403497f029cf"}
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.192553 4761 scope.go:117] "RemoveContainer" containerID="4e56717fa60308e8f622ec33776708c4b00d9ccd7a8ad0a18a994be6b41d32a1"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.514045 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-rdxm2"]
Mar 07 08:04:44 crc kubenswrapper[4761]: E0307 08:04:44.514363 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2ec016f-1c81-4af0-8f87-99481163f94c" containerName="oc"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.514377 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2ec016f-1c81-4af0-8f87-99481163f94c" containerName="oc"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.514567 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2ec016f-1c81-4af0-8f87-99481163f94c" containerName="oc"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.515155 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.519833 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.520579 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.520796 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-cfg8f"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.524475 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.525421 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-rdxm2"]
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.526552 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.527673 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.594879 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-rdxm2"]
Mar 07 08:04:44 crc kubenswrapper[4761]: E0307 08:04:44.595487 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-fpg7q metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-fpg7q metrics sa-token tmp trusted-ca]: context canceled" pod="openshift-logging/collector-rdxm2" podUID="2d6f8cde-e806-4618-8f59-ec0f2b6e677c"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.654398 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-datadir\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.654446 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-syslog-receiver\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.654464 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-sa-token\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.654491 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-tmp\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.654506 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpg7q\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-kube-api-access-fpg7q\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.654600 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-entrypoint\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.654651 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config-openshift-service-cacrt\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.654917 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-metrics\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.654979 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-trusted-ca\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.655006 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-token\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.655024 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.756797 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-entrypoint\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.756861 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config-openshift-service-cacrt\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.756911 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-metrics\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.756944 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-trusted-ca\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.756970 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-token\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.756992 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: E0307 08:04:44.757052 4761 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.757078 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-datadir\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: E0307 08:04:44.757124 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-metrics podName:2d6f8cde-e806-4618-8f59-ec0f2b6e677c nodeName:}" failed. No retries permitted until 2026-03-07 08:04:45.257100215 +0000 UTC m=+942.166266700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-metrics") pod "collector-rdxm2" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c") : secret "collector-metrics" not found
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.757137 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-datadir\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.757148 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-syslog-receiver\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.757178 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-sa-token\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.757230 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-tmp\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.757251 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpg7q\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-kube-api-access-fpg7q\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.757994 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-entrypoint\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.758033 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config-openshift-service-cacrt\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.758049 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-trusted-ca\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.758527 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.763126 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-tmp\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.763618 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-syslog-receiver\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.773868 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-token\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.774011 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-sa-token\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.775855 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpg7q\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-kube-api-access-fpg7q\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.218370 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.237953 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.265109 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-metrics\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.270085 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-metrics\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.366548 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-sa-token\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") "
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.366914 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-token\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") "
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.366991 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-syslog-receiver\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") "
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367018 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-entrypoint\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") "
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367054 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-metrics\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") "
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367074 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-tmp\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") "
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367109 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpg7q\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-kube-api-access-fpg7q\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") "
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367190 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") "
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367218 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-trusted-ca\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") "
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367249 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-datadir\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") "
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367289 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config-openshift-service-cacrt\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") "
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367508 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-datadir" (OuterVolumeSpecName: "datadir") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367555 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367825 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config" (OuterVolumeSpecName: "config") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367835 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367902 4761 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-entrypoint\") on node \"crc\" DevicePath \"\""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367926 4761 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-datadir\") on node \"crc\" DevicePath \"\""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367945 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.370023 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-tmp" (OuterVolumeSpecName: "tmp") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.370449 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-sa-token" (OuterVolumeSpecName: "sa-token") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.370568 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-metrics" (OuterVolumeSpecName: "metrics") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.370905 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.371087 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-token" (OuterVolumeSpecName: "collector-token") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.372723 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-kube-api-access-fpg7q" (OuterVolumeSpecName: "kube-api-access-fpg7q") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "kube-api-access-fpg7q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.469631 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config\") on node \"crc\" DevicePath \"\""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.469663 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.469675 4761 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.469684 4761 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-sa-token\") on node \"crc\" DevicePath \"\""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.469693 4761 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-token\") on node \"crc\" DevicePath \"\""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.469702 4761 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-syslog-receiver\") on node \"crc\" DevicePath \"\""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.469710 4761 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-metrics\") on node \"crc\" DevicePath \"\""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.469736 4761 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-tmp\") on node \"crc\" DevicePath \"\""
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.469747 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpg7q\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-kube-api-access-fpg7q\") on node \"crc\" DevicePath \"\""
Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.225084 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rdxm2"
Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.277388 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-rdxm2"]
Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.284694 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-rdxm2"]
Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.290979 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-ntd8l"]
Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.292181 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-ntd8l"
Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.299983 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-ntd8l"]
Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.301066 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.301199 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.301423 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.301604 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-cfg8f"
Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.301973 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.309445 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381405 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-config-openshift-service-cacrt\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l"
Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381457 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/9756514d-4338-4ae3-bf64-4498bb1b8f88-sa-token\") pod \"collector-ntd8l\" (UID:
\"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381482 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-config\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381512 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49mtq\" (UniqueName: \"kubernetes.io/projected/9756514d-4338-4ae3-bf64-4498bb1b8f88-kube-api-access-49mtq\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381538 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9756514d-4338-4ae3-bf64-4498bb1b8f88-tmp\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381562 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/9756514d-4338-4ae3-bf64-4498bb1b8f88-datadir\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381592 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/9756514d-4338-4ae3-bf64-4498bb1b8f88-collector-token\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 
07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381615 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-entrypoint\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381637 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-trusted-ca\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381728 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/9756514d-4338-4ae3-bf64-4498bb1b8f88-collector-syslog-receiver\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381780 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/9756514d-4338-4ae3-bf64-4498bb1b8f88-metrics\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483459 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/9756514d-4338-4ae3-bf64-4498bb1b8f88-collector-syslog-receiver\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483528 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/9756514d-4338-4ae3-bf64-4498bb1b8f88-metrics\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483560 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-config-openshift-service-cacrt\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483585 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/9756514d-4338-4ae3-bf64-4498bb1b8f88-sa-token\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483605 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-config\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483628 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49mtq\" (UniqueName: \"kubernetes.io/projected/9756514d-4338-4ae3-bf64-4498bb1b8f88-kube-api-access-49mtq\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483650 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/9756514d-4338-4ae3-bf64-4498bb1b8f88-tmp\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483674 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/9756514d-4338-4ae3-bf64-4498bb1b8f88-datadir\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483701 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/9756514d-4338-4ae3-bf64-4498bb1b8f88-collector-token\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483741 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-entrypoint\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483763 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-trusted-ca\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.484346 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/9756514d-4338-4ae3-bf64-4498bb1b8f88-datadir\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 
08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.484869 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-trusted-ca\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.485203 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-entrypoint\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.485249 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-config-openshift-service-cacrt\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.486145 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-config\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.489836 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/9756514d-4338-4ae3-bf64-4498bb1b8f88-metrics\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.490180 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/9756514d-4338-4ae3-bf64-4498bb1b8f88-tmp\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.490753 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/9756514d-4338-4ae3-bf64-4498bb1b8f88-collector-token\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.493378 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/9756514d-4338-4ae3-bf64-4498bb1b8f88-collector-syslog-receiver\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.509241 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/9756514d-4338-4ae3-bf64-4498bb1b8f88-sa-token\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.516011 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49mtq\" (UniqueName: \"kubernetes.io/projected/9756514d-4338-4ae3-bf64-4498bb1b8f88-kube-api-access-49mtq\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.622126 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-ntd8l" Mar 07 08:04:47 crc kubenswrapper[4761]: I0307 08:04:47.098801 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-ntd8l"] Mar 07 08:04:47 crc kubenswrapper[4761]: I0307 08:04:47.234222 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-ntd8l" event={"ID":"9756514d-4338-4ae3-bf64-4498bb1b8f88","Type":"ContainerStarted","Data":"1886a9925a17f61f74036c5f869efcf3a4bd0c8aa7a4f25ca6336822a4507f9e"} Mar 07 08:04:47 crc kubenswrapper[4761]: I0307 08:04:47.718593 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d6f8cde-e806-4618-8f59-ec0f2b6e677c" path="/var/lib/kubelet/pods/2d6f8cde-e806-4618-8f59-ec0f2b6e677c/volumes" Mar 07 08:04:53 crc kubenswrapper[4761]: I0307 08:04:53.280428 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-ntd8l" event={"ID":"9756514d-4338-4ae3-bf64-4498bb1b8f88","Type":"ContainerStarted","Data":"eca109be51263794c75aac87c76a76e66d365bf8ad40e26caec200c1e1a7d170"} Mar 07 08:04:53 crc kubenswrapper[4761]: I0307 08:04:53.338855 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-ntd8l" podStartSLOduration=1.531873281 podStartE2EDuration="7.338827977s" podCreationTimestamp="2026-03-07 08:04:46 +0000 UTC" firstStartedPulling="2026-03-07 08:04:47.113806948 +0000 UTC m=+944.022973433" lastFinishedPulling="2026-03-07 08:04:52.920761654 +0000 UTC m=+949.829928129" observedRunningTime="2026-03-07 08:04:53.319359637 +0000 UTC m=+950.228526162" watchObservedRunningTime="2026-03-07 08:04:53.338827977 +0000 UTC m=+950.247994492" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.326387 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xsdvs"] Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.328181 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.334541 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xsdvs"] Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.428880 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-catalog-content\") pod \"redhat-marketplace-xsdvs\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.429078 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-utilities\") pod \"redhat-marketplace-xsdvs\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.429682 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j25sl\" (UniqueName: \"kubernetes.io/projected/c8ba81bf-9e75-4740-95f6-01b2846b54db-kube-api-access-j25sl\") pod \"redhat-marketplace-xsdvs\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.531809 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-utilities\") pod \"redhat-marketplace-xsdvs\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.532091 4761 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-j25sl\" (UniqueName: \"kubernetes.io/projected/c8ba81bf-9e75-4740-95f6-01b2846b54db-kube-api-access-j25sl\") pod \"redhat-marketplace-xsdvs\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.532253 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-catalog-content\") pod \"redhat-marketplace-xsdvs\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.532357 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-utilities\") pod \"redhat-marketplace-xsdvs\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.532510 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-catalog-content\") pod \"redhat-marketplace-xsdvs\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.563272 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j25sl\" (UniqueName: \"kubernetes.io/projected/c8ba81bf-9e75-4740-95f6-01b2846b54db-kube-api-access-j25sl\") pod \"redhat-marketplace-xsdvs\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.650671 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:55 crc kubenswrapper[4761]: I0307 08:04:55.090111 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xsdvs"] Mar 07 08:04:55 crc kubenswrapper[4761]: W0307 08:04:55.094624 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8ba81bf_9e75_4740_95f6_01b2846b54db.slice/crio-7edb344ffb9302370aa45ab12046fbf98672eec571496cc0abbfb65fc948f49e WatchSource:0}: Error finding container 7edb344ffb9302370aa45ab12046fbf98672eec571496cc0abbfb65fc948f49e: Status 404 returned error can't find the container with id 7edb344ffb9302370aa45ab12046fbf98672eec571496cc0abbfb65fc948f49e Mar 07 08:04:55 crc kubenswrapper[4761]: I0307 08:04:55.295620 4761 generic.go:334] "Generic (PLEG): container finished" podID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerID="62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611" exitCode=0 Mar 07 08:04:55 crc kubenswrapper[4761]: I0307 08:04:55.295658 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsdvs" event={"ID":"c8ba81bf-9e75-4740-95f6-01b2846b54db","Type":"ContainerDied","Data":"62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611"} Mar 07 08:04:55 crc kubenswrapper[4761]: I0307 08:04:55.295682 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsdvs" event={"ID":"c8ba81bf-9e75-4740-95f6-01b2846b54db","Type":"ContainerStarted","Data":"7edb344ffb9302370aa45ab12046fbf98672eec571496cc0abbfb65fc948f49e"} Mar 07 08:04:56 crc kubenswrapper[4761]: I0307 08:04:56.303933 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsdvs" 
event={"ID":"c8ba81bf-9e75-4740-95f6-01b2846b54db","Type":"ContainerStarted","Data":"7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a"} Mar 07 08:04:57 crc kubenswrapper[4761]: I0307 08:04:57.314032 4761 generic.go:334] "Generic (PLEG): container finished" podID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerID="7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a" exitCode=0 Mar 07 08:04:57 crc kubenswrapper[4761]: I0307 08:04:57.314123 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsdvs" event={"ID":"c8ba81bf-9e75-4740-95f6-01b2846b54db","Type":"ContainerDied","Data":"7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a"} Mar 07 08:04:58 crc kubenswrapper[4761]: I0307 08:04:58.324458 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsdvs" event={"ID":"c8ba81bf-9e75-4740-95f6-01b2846b54db","Type":"ContainerStarted","Data":"e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e"} Mar 07 08:04:58 crc kubenswrapper[4761]: I0307 08:04:58.345158 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xsdvs" podStartSLOduration=1.917633632 podStartE2EDuration="4.345141183s" podCreationTimestamp="2026-03-07 08:04:54 +0000 UTC" firstStartedPulling="2026-03-07 08:04:55.29747052 +0000 UTC m=+952.206636995" lastFinishedPulling="2026-03-07 08:04:57.724978061 +0000 UTC m=+954.634144546" observedRunningTime="2026-03-07 08:04:58.342328715 +0000 UTC m=+955.251495220" watchObservedRunningTime="2026-03-07 08:04:58.345141183 +0000 UTC m=+955.254307678" Mar 07 08:05:04 crc kubenswrapper[4761]: I0307 08:05:04.651798 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:05:04 crc kubenswrapper[4761]: I0307 08:05:04.653565 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:05:04 crc kubenswrapper[4761]: I0307 08:05:04.714358 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:05:05 crc kubenswrapper[4761]: I0307 08:05:05.448352 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:05:05 crc kubenswrapper[4761]: I0307 08:05:05.514639 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xsdvs"] Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.396602 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xsdvs" podUID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerName="registry-server" containerID="cri-o://e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e" gracePeriod=2 Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.787453 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.864425 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j25sl\" (UniqueName: \"kubernetes.io/projected/c8ba81bf-9e75-4740-95f6-01b2846b54db-kube-api-access-j25sl\") pod \"c8ba81bf-9e75-4740-95f6-01b2846b54db\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.864558 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-utilities\") pod \"c8ba81bf-9e75-4740-95f6-01b2846b54db\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.864615 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-catalog-content\") pod \"c8ba81bf-9e75-4740-95f6-01b2846b54db\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.865620 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-utilities" (OuterVolumeSpecName: "utilities") pod "c8ba81bf-9e75-4740-95f6-01b2846b54db" (UID: "c8ba81bf-9e75-4740-95f6-01b2846b54db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.870937 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8ba81bf-9e75-4740-95f6-01b2846b54db-kube-api-access-j25sl" (OuterVolumeSpecName: "kube-api-access-j25sl") pod "c8ba81bf-9e75-4740-95f6-01b2846b54db" (UID: "c8ba81bf-9e75-4740-95f6-01b2846b54db"). InnerVolumeSpecName "kube-api-access-j25sl". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.899129 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8ba81bf-9e75-4740-95f6-01b2846b54db" (UID: "c8ba81bf-9e75-4740-95f6-01b2846b54db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.967257 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j25sl\" (UniqueName: \"kubernetes.io/projected/c8ba81bf-9e75-4740-95f6-01b2846b54db-kube-api-access-j25sl\") on node \"crc\" DevicePath \"\""
Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.967310 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.967332 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.405351 4761 generic.go:334] "Generic (PLEG): container finished" podID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerID="e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e" exitCode=0
Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.405407 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsdvs" event={"ID":"c8ba81bf-9e75-4740-95f6-01b2846b54db","Type":"ContainerDied","Data":"e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e"}
Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.405453 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xsdvs"
Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.405481 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsdvs" event={"ID":"c8ba81bf-9e75-4740-95f6-01b2846b54db","Type":"ContainerDied","Data":"7edb344ffb9302370aa45ab12046fbf98672eec571496cc0abbfb65fc948f49e"}
Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.405512 4761 scope.go:117] "RemoveContainer" containerID="e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e"
Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.427578 4761 scope.go:117] "RemoveContainer" containerID="7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a"
Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.448761 4761 scope.go:117] "RemoveContainer" containerID="62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611"
Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.449598 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xsdvs"]
Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.474690 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xsdvs"]
Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.492832 4761 scope.go:117] "RemoveContainer" containerID="e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e"
Mar 07 08:05:08 crc kubenswrapper[4761]: E0307 08:05:08.493300 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e\": container with ID starting with e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e not found: ID does not exist" containerID="e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e"
Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.493358 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e"} err="failed to get container status \"e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e\": rpc error: code = NotFound desc = could not find container \"e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e\": container with ID starting with e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e not found: ID does not exist"
Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.493384 4761 scope.go:117] "RemoveContainer" containerID="7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a"
Mar 07 08:05:08 crc kubenswrapper[4761]: E0307 08:05:08.493887 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a\": container with ID starting with 7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a not found: ID does not exist" containerID="7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a"
Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.493928 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a"} err="failed to get container status \"7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a\": rpc error: code = NotFound desc = could not find container \"7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a\": container with ID starting with 7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a not found: ID does not exist"
Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.493950 4761 scope.go:117] "RemoveContainer" containerID="62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611"
Mar 07 08:05:08 crc kubenswrapper[4761]: E0307 08:05:08.494237 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611\": container with ID starting with 62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611 not found: ID does not exist" containerID="62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611"
Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.494268 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611"} err="failed to get container status \"62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611\": rpc error: code = NotFound desc = could not find container \"62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611\": container with ID starting with 62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611 not found: ID does not exist"
Mar 07 08:05:09 crc kubenswrapper[4761]: I0307 08:05:09.715730 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8ba81bf-9e75-4740-95f6-01b2846b54db" path="/var/lib/kubelet/pods/c8ba81bf-9e75-4740-95f6-01b2846b54db/volumes"
Mar 07 08:05:23 crc kubenswrapper[4761]: I0307 08:05:23.966433 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"]
Mar 07 08:05:23 crc kubenswrapper[4761]: E0307 08:05:23.967232 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerName="extract-content"
Mar 07 08:05:23 crc kubenswrapper[4761]: I0307 08:05:23.967247 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerName="extract-content"
Mar 07 08:05:23 crc kubenswrapper[4761]: E0307 08:05:23.967259 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerName="extract-utilities"
Mar 07 08:05:23 crc kubenswrapper[4761]: I0307 08:05:23.967266 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerName="extract-utilities"
Mar 07 08:05:23 crc kubenswrapper[4761]: E0307 08:05:23.967284 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerName="registry-server"
Mar 07 08:05:23 crc kubenswrapper[4761]: I0307 08:05:23.967290 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerName="registry-server"
Mar 07 08:05:23 crc kubenswrapper[4761]: I0307 08:05:23.967434 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerName="registry-server"
Mar 07 08:05:23 crc kubenswrapper[4761]: I0307 08:05:23.968541 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"
Mar 07 08:05:23 crc kubenswrapper[4761]: I0307 08:05:23.970292 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"]
Mar 07 08:05:23 crc kubenswrapper[4761]: I0307 08:05:23.970393 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.070363 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"
Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.070658 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"
Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.070740 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7vnr\" (UniqueName: \"kubernetes.io/projected/1d21bb59-ff27-4146-a566-a48cad049a17-kube-api-access-c7vnr\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"
Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.172501 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"
Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.172583 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"
Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.172650 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7vnr\" (UniqueName: \"kubernetes.io/projected/1d21bb59-ff27-4146-a566-a48cad049a17-kube-api-access-c7vnr\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"
Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.173747 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"
Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.174076 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"
Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.207887 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7vnr\" (UniqueName: \"kubernetes.io/projected/1d21bb59-ff27-4146-a566-a48cad049a17-kube-api-access-c7vnr\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"
Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.293562 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"
Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.552743 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"]
Mar 07 08:05:25 crc kubenswrapper[4761]: I0307 08:05:25.544869 4761 generic.go:334] "Generic (PLEG): container finished" podID="1d21bb59-ff27-4146-a566-a48cad049a17" containerID="f483bf7f40142dbb57275b2dbdf67d4e06b47191e66a396e893eeb431087438a" exitCode=0
Mar 07 08:05:25 crc kubenswrapper[4761]: I0307 08:05:25.544915 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" event={"ID":"1d21bb59-ff27-4146-a566-a48cad049a17","Type":"ContainerDied","Data":"f483bf7f40142dbb57275b2dbdf67d4e06b47191e66a396e893eeb431087438a"}
Mar 07 08:05:25 crc kubenswrapper[4761]: I0307 08:05:25.544943 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" event={"ID":"1d21bb59-ff27-4146-a566-a48cad049a17","Type":"ContainerStarted","Data":"7e1963379c43042362d1532ffdc010dadcf4de1983098cd8484aa24f9a812bf5"}
Mar 07 08:05:27 crc kubenswrapper[4761]: I0307 08:05:27.563651 4761 generic.go:334] "Generic (PLEG): container finished" podID="1d21bb59-ff27-4146-a566-a48cad049a17" containerID="d8a5f0670fc1b3f1a07d994fbb5d5cca587a86009d9db24fa8ae49ac1d29f6d9" exitCode=0
Mar 07 08:05:27 crc kubenswrapper[4761]: I0307 08:05:27.563695 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" event={"ID":"1d21bb59-ff27-4146-a566-a48cad049a17","Type":"ContainerDied","Data":"d8a5f0670fc1b3f1a07d994fbb5d5cca587a86009d9db24fa8ae49ac1d29f6d9"}
Mar 07 08:05:28 crc kubenswrapper[4761]: I0307 08:05:28.573157 4761 generic.go:334] "Generic (PLEG): container finished" podID="1d21bb59-ff27-4146-a566-a48cad049a17" containerID="7a3110ab683cfefcd49bcaa10442f67989b109fe8848a5100fbc785dbf8c5537" exitCode=0
Mar 07 08:05:28 crc kubenswrapper[4761]: I0307 08:05:28.573208 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" event={"ID":"1d21bb59-ff27-4146-a566-a48cad049a17","Type":"ContainerDied","Data":"7a3110ab683cfefcd49bcaa10442f67989b109fe8848a5100fbc785dbf8c5537"}
Mar 07 08:05:29 crc kubenswrapper[4761]: I0307 08:05:29.921873 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"
Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.070848 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-bundle\") pod \"1d21bb59-ff27-4146-a566-a48cad049a17\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") "
Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.070907 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7vnr\" (UniqueName: \"kubernetes.io/projected/1d21bb59-ff27-4146-a566-a48cad049a17-kube-api-access-c7vnr\") pod \"1d21bb59-ff27-4146-a566-a48cad049a17\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") "
Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.071012 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-util\") pod \"1d21bb59-ff27-4146-a566-a48cad049a17\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") "
Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.071689 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-bundle" (OuterVolumeSpecName: "bundle") pod "1d21bb59-ff27-4146-a566-a48cad049a17" (UID: "1d21bb59-ff27-4146-a566-a48cad049a17"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.083583 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d21bb59-ff27-4146-a566-a48cad049a17-kube-api-access-c7vnr" (OuterVolumeSpecName: "kube-api-access-c7vnr") pod "1d21bb59-ff27-4146-a566-a48cad049a17" (UID: "1d21bb59-ff27-4146-a566-a48cad049a17"). InnerVolumeSpecName "kube-api-access-c7vnr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.084183 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-util" (OuterVolumeSpecName: "util") pod "1d21bb59-ff27-4146-a566-a48cad049a17" (UID: "1d21bb59-ff27-4146-a566-a48cad049a17"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.173093 4761 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.173490 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7vnr\" (UniqueName: \"kubernetes.io/projected/1d21bb59-ff27-4146-a566-a48cad049a17-kube-api-access-c7vnr\") on node \"crc\" DevicePath \"\""
Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.173520 4761 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-util\") on node \"crc\" DevicePath \"\""
Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.588117 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" event={"ID":"1d21bb59-ff27-4146-a566-a48cad049a17","Type":"ContainerDied","Data":"7e1963379c43042362d1532ffdc010dadcf4de1983098cd8484aa24f9a812bf5"}
Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.588457 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e1963379c43042362d1532ffdc010dadcf4de1983098cd8484aa24f9a812bf5"
Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.588181 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"
Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.212316 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-9894j"]
Mar 07 08:05:33 crc kubenswrapper[4761]: E0307 08:05:33.212923 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d21bb59-ff27-4146-a566-a48cad049a17" containerName="util"
Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.212940 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d21bb59-ff27-4146-a566-a48cad049a17" containerName="util"
Mar 07 08:05:33 crc kubenswrapper[4761]: E0307 08:05:33.212966 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d21bb59-ff27-4146-a566-a48cad049a17" containerName="pull"
Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.212974 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d21bb59-ff27-4146-a566-a48cad049a17" containerName="pull"
Mar 07 08:05:33 crc kubenswrapper[4761]: E0307 08:05:33.212995 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d21bb59-ff27-4146-a566-a48cad049a17" containerName="extract"
Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.213005 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d21bb59-ff27-4146-a566-a48cad049a17" containerName="extract"
Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.213157 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d21bb59-ff27-4146-a566-a48cad049a17" containerName="extract"
Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.213810 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-9894j"
Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.215481 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.215812 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-sj6rw"
Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.216009 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.221394 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-9894j"]
Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.325465 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl9jq\" (UniqueName: \"kubernetes.io/projected/379eee65-d23d-4c2e-94fe-254d7069d0e6-kube-api-access-rl9jq\") pod \"nmstate-operator-75c5dccd6c-9894j\" (UID: \"379eee65-d23d-4c2e-94fe-254d7069d0e6\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-9894j"
Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.427841 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl9jq\" (UniqueName: \"kubernetes.io/projected/379eee65-d23d-4c2e-94fe-254d7069d0e6-kube-api-access-rl9jq\") pod \"nmstate-operator-75c5dccd6c-9894j\" (UID: \"379eee65-d23d-4c2e-94fe-254d7069d0e6\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-9894j"
Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.447565 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl9jq\" (UniqueName: \"kubernetes.io/projected/379eee65-d23d-4c2e-94fe-254d7069d0e6-kube-api-access-rl9jq\") pod \"nmstate-operator-75c5dccd6c-9894j\" (UID: \"379eee65-d23d-4c2e-94fe-254d7069d0e6\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-9894j"
Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.542363 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-9894j"
Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.983687 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-9894j"]
Mar 07 08:05:34 crc kubenswrapper[4761]: I0307 08:05:34.617204 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-9894j" event={"ID":"379eee65-d23d-4c2e-94fe-254d7069d0e6","Type":"ContainerStarted","Data":"4df120357bef8a49bf0e26971159a01b8400641bb16e695310c0d57d1dc61ea5"}
Mar 07 08:05:36 crc kubenswrapper[4761]: I0307 08:05:36.632314 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-9894j" event={"ID":"379eee65-d23d-4c2e-94fe-254d7069d0e6","Type":"ContainerStarted","Data":"b55e32ea1756a7ff1963f3565ae1416bd953fbfb12a37f006c4704b82ce4af6d"}
Mar 07 08:05:36 crc kubenswrapper[4761]: I0307 08:05:36.650608 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-9894j" podStartSLOduration=1.5133644579999999 podStartE2EDuration="3.650583854s" podCreationTimestamp="2026-03-07 08:05:33 +0000 UTC" firstStartedPulling="2026-03-07 08:05:33.990214125 +0000 UTC m=+990.899380590" lastFinishedPulling="2026-03-07 08:05:36.127433501 +0000 UTC m=+993.036599986" observedRunningTime="2026-03-07 08:05:36.64385829 +0000 UTC m=+993.553024785" watchObservedRunningTime="2026-03-07 08:05:36.650583854 +0000 UTC m=+993.559750329"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.625135 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-jmzd9"]
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.626516 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-jmzd9"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.628827 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-8rngz"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.646038 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-jmzd9"]
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.652538 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-vrchq"]
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.653482 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.671479 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.697796 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-vrchq"]
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.746495 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-p788d"]
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.747397 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-p788d"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.795942 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdq8l\" (UniqueName: \"kubernetes.io/projected/fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3-kube-api-access-rdq8l\") pod \"nmstate-webhook-786f45cff4-vrchq\" (UID: \"fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.796009 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-vrchq\" (UID: \"fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.796041 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt6ct\" (UniqueName: \"kubernetes.io/projected/e9969064-2a65-4728-b9b2-8a02da45bacb-kube-api-access-mt6ct\") pod \"nmstate-metrics-69594cc75-jmzd9\" (UID: \"e9969064-2a65-4728-b9b2-8a02da45bacb\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-jmzd9"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.835045 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26"]
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.835889 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.838811 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-gr6g6"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.838900 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.838930 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.852338 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26"]
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.901426 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-nmstate-lock\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.901498 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-dbus-socket\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.901527 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8nf2\" (UniqueName: \"kubernetes.io/projected/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-kube-api-access-p8nf2\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.901556 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdq8l\" (UniqueName: \"kubernetes.io/projected/fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3-kube-api-access-rdq8l\") pod \"nmstate-webhook-786f45cff4-vrchq\" (UID: \"fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.901602 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-vrchq\" (UID: \"fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.901634 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-ovs-socket\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.901669 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt6ct\" (UniqueName: \"kubernetes.io/projected/e9969064-2a65-4728-b9b2-8a02da45bacb-kube-api-access-mt6ct\") pod \"nmstate-metrics-69594cc75-jmzd9\" (UID: \"e9969064-2a65-4728-b9b2-8a02da45bacb\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-jmzd9"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.908579 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-vrchq\" (UID: \"fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.923043 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt6ct\" (UniqueName: \"kubernetes.io/projected/e9969064-2a65-4728-b9b2-8a02da45bacb-kube-api-access-mt6ct\") pod \"nmstate-metrics-69594cc75-jmzd9\" (UID: \"e9969064-2a65-4728-b9b2-8a02da45bacb\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-jmzd9"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.932331 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdq8l\" (UniqueName: \"kubernetes.io/projected/fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3-kube-api-access-rdq8l\") pod \"nmstate-webhook-786f45cff4-vrchq\" (UID: \"fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq"
Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.941358 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-jmzd9"
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.003613 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8nf2\" (UniqueName: \"kubernetes.io/projected/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-kube-api-access-p8nf2\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d"
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.003769 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-ovs-socket\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d"
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.003815 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxk9c\" (UniqueName: \"kubernetes.io/projected/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-kube-api-access-dxk9c\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26"
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.003900 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26"
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.003957 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-nmstate-lock\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d"
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.004008 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-nmstate-lock\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d"
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.004013 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26"
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.004056 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-ovs-socket\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d"
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.004115 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-dbus-socket\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d"
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.004433 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-dbus-socket\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d"
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.013548 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq"
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.041381 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8nf2\" (UniqueName: \"kubernetes.io/projected/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-kube-api-access-p8nf2\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d"
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.042686 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5dd9c59c48-q98tn"]
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.043592 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5dd9c59c48-q98tn"
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.050336 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5dd9c59c48-q98tn"]
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.062428 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-p788d"
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.108385 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxk9c\" (UniqueName: \"kubernetes.io/projected/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-kube-api-access-dxk9c\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26"
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.108505 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26"
Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.108567 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26"
Mar 07 08:05:38 crc kubenswrapper[4761]: E0307 08:05:38.108711 4761 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Mar 07 08:05:38 crc kubenswrapper[4761]: E0307 08:05:38.108789 4761 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-plugin-serving-cert podName:b295a49c-b8ec-45ab-a04e-b08d9fafe91b nodeName:}" failed. No retries permitted until 2026-03-07 08:05:38.608766727 +0000 UTC m=+995.517933212 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-plugin-serving-cert") pod "nmstate-console-plugin-5dcbbd79cf-nhw26" (UID: "b295a49c-b8ec-45ab-a04e-b08d9fafe91b") : secret "plugin-serving-cert" not found Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.110149 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.133525 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxk9c\" (UniqueName: \"kubernetes.io/projected/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-kube-api-access-dxk9c\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.209984 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-trusted-ca-bundle\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.210272 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-serving-cert\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.210325 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-config\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.210353 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzrm2\" (UniqueName: \"kubernetes.io/projected/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-kube-api-access-gzrm2\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.210375 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-oauth-config\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.210390 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-oauth-serving-cert\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.210405 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-service-ca\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.311651 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-config\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.311726 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzrm2\" (UniqueName: \"kubernetes.io/projected/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-kube-api-access-gzrm2\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.311759 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-oauth-config\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.311784 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-oauth-serving-cert\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.311809 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-service-ca\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.311858 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-trusted-ca-bundle\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.311940 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-serving-cert\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.312764 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-config\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.312841 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-oauth-serving-cert\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.313342 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-service-ca\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.313696 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-trusted-ca-bundle\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.316667 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-oauth-config\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.317089 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-serving-cert\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.328384 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzrm2\" (UniqueName: \"kubernetes.io/projected/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-kube-api-access-gzrm2\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.434238 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.502223 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-jmzd9"] Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.594446 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-vrchq"] Mar 07 08:05:38 crc kubenswrapper[4761]: W0307 08:05:38.599838 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe4dc2d0_278c_4d1c_952a_20cd07e1cdf3.slice/crio-f3eb07648394d2fa817ddc67a7d823d999527134cb1e5aaf67fa95f1898a120f WatchSource:0}: Error finding container f3eb07648394d2fa817ddc67a7d823d999527134cb1e5aaf67fa95f1898a120f: Status 404 returned error can't find the container with id f3eb07648394d2fa817ddc67a7d823d999527134cb1e5aaf67fa95f1898a120f Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.619142 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.623261 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.650389 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" 
event={"ID":"fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3","Type":"ContainerStarted","Data":"f3eb07648394d2fa817ddc67a7d823d999527134cb1e5aaf67fa95f1898a120f"} Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.654166 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-p788d" event={"ID":"37e4e36d-77bd-4618-8b4d-4653a71a0f2e","Type":"ContainerStarted","Data":"98bd3795dc7c911566451e6d4a7c539d2a9bc83c1d2715eb98e06c5b5f12928e"} Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.654598 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5dd9c59c48-q98tn"] Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.656175 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-jmzd9" event={"ID":"e9969064-2a65-4728-b9b2-8a02da45bacb","Type":"ContainerStarted","Data":"50282ef0d099b69611fad0feb675ad74d8e5081f27eb358d8bbbbe83ecaa0340"} Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.751339 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" Mar 07 08:05:39 crc kubenswrapper[4761]: I0307 08:05:39.257397 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26"] Mar 07 08:05:39 crc kubenswrapper[4761]: I0307 08:05:39.664455 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" event={"ID":"b295a49c-b8ec-45ab-a04e-b08d9fafe91b","Type":"ContainerStarted","Data":"d8c44a571e1cb3b073b9b08e62ab7c12767553967f37edf97754dcf7c1d58a40"} Mar 07 08:05:39 crc kubenswrapper[4761]: I0307 08:05:39.666120 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dd9c59c48-q98tn" event={"ID":"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc","Type":"ContainerStarted","Data":"771fcc82e9e174ecccd0b64d2b97c51eaa31b3eb4f5e46854d449a2314c1bfae"} Mar 07 08:05:39 crc kubenswrapper[4761]: I0307 08:05:39.666151 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dd9c59c48-q98tn" event={"ID":"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc","Type":"ContainerStarted","Data":"f3ceda7127d4a5ed6071b386f7c8619bc08af08837dc47cb8e39f89c79cb88f3"} Mar 07 08:05:39 crc kubenswrapper[4761]: I0307 08:05:39.712847 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5dd9c59c48-q98tn" podStartSLOduration=1.712820094 podStartE2EDuration="1.712820094s" podCreationTimestamp="2026-03-07 08:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:05:39.694279711 +0000 UTC m=+996.603446226" watchObservedRunningTime="2026-03-07 08:05:39.712820094 +0000 UTC m=+996.621986589" Mar 07 08:05:41 crc kubenswrapper[4761]: I0307 08:05:41.689142 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" 
event={"ID":"fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3","Type":"ContainerStarted","Data":"9aa26111612918de87836b0f210afd72785c8e5fdcc198c78d82f84f53cf0e3d"} Mar 07 08:05:41 crc kubenswrapper[4761]: I0307 08:05:41.689801 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" Mar 07 08:05:41 crc kubenswrapper[4761]: I0307 08:05:41.693818 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-p788d" event={"ID":"37e4e36d-77bd-4618-8b4d-4653a71a0f2e","Type":"ContainerStarted","Data":"9fa495112beb549350671e504c596054795a24e7f379fa8f11d5084541b92a14"} Mar 07 08:05:41 crc kubenswrapper[4761]: I0307 08:05:41.693925 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-p788d" Mar 07 08:05:41 crc kubenswrapper[4761]: I0307 08:05:41.695884 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-jmzd9" event={"ID":"e9969064-2a65-4728-b9b2-8a02da45bacb","Type":"ContainerStarted","Data":"3089b87dee68614e4ca8dffc3955693260dd5e818b679c84458904b4b36a1b3a"} Mar 07 08:05:41 crc kubenswrapper[4761]: I0307 08:05:41.712524 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" podStartSLOduration=2.392693472 podStartE2EDuration="4.712500168s" podCreationTimestamp="2026-03-07 08:05:37 +0000 UTC" firstStartedPulling="2026-03-07 08:05:38.603385794 +0000 UTC m=+995.512552269" lastFinishedPulling="2026-03-07 08:05:40.92319249 +0000 UTC m=+997.832358965" observedRunningTime="2026-03-07 08:05:41.706692766 +0000 UTC m=+998.615859241" watchObservedRunningTime="2026-03-07 08:05:41.712500168 +0000 UTC m=+998.621666643" Mar 07 08:05:41 crc kubenswrapper[4761]: I0307 08:05:41.728053 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-p788d" 
podStartSLOduration=1.908883739 podStartE2EDuration="4.728035028s" podCreationTimestamp="2026-03-07 08:05:37 +0000 UTC" firstStartedPulling="2026-03-07 08:05:38.139120089 +0000 UTC m=+995.048286564" lastFinishedPulling="2026-03-07 08:05:40.958271378 +0000 UTC m=+997.867437853" observedRunningTime="2026-03-07 08:05:41.725451925 +0000 UTC m=+998.634618400" watchObservedRunningTime="2026-03-07 08:05:41.728035028 +0000 UTC m=+998.637201503" Mar 07 08:05:42 crc kubenswrapper[4761]: I0307 08:05:42.709219 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" event={"ID":"b295a49c-b8ec-45ab-a04e-b08d9fafe91b","Type":"ContainerStarted","Data":"d781bd5e3d0a5362af2fa3098ae6fe820167ba8421a9003466e26fd27e68c3bf"} Mar 07 08:05:43 crc kubenswrapper[4761]: I0307 08:05:43.727755 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" podStartSLOduration=3.903629233 podStartE2EDuration="6.727735453s" podCreationTimestamp="2026-03-07 08:05:37 +0000 UTC" firstStartedPulling="2026-03-07 08:05:39.262993282 +0000 UTC m=+996.172159747" lastFinishedPulling="2026-03-07 08:05:42.087099492 +0000 UTC m=+998.996265967" observedRunningTime="2026-03-07 08:05:42.739568116 +0000 UTC m=+999.648734601" watchObservedRunningTime="2026-03-07 08:05:43.727735453 +0000 UTC m=+1000.636901928" Mar 07 08:05:44 crc kubenswrapper[4761]: I0307 08:05:44.728493 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-jmzd9" event={"ID":"e9969064-2a65-4728-b9b2-8a02da45bacb","Type":"ContainerStarted","Data":"976fc68a5623f20bd7c5547749f9503113dd606cc46781836632365ad9bf4fb5"} Mar 07 08:05:44 crc kubenswrapper[4761]: I0307 08:05:44.743920 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-jmzd9" podStartSLOduration=2.616372218 
podStartE2EDuration="7.743877273s" podCreationTimestamp="2026-03-07 08:05:37 +0000 UTC" firstStartedPulling="2026-03-07 08:05:38.511017627 +0000 UTC m=+995.420184092" lastFinishedPulling="2026-03-07 08:05:43.638522672 +0000 UTC m=+1000.547689147" observedRunningTime="2026-03-07 08:05:44.743000762 +0000 UTC m=+1001.652167247" watchObservedRunningTime="2026-03-07 08:05:44.743877273 +0000 UTC m=+1001.653043748" Mar 07 08:05:48 crc kubenswrapper[4761]: I0307 08:05:48.089090 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-p788d" Mar 07 08:05:48 crc kubenswrapper[4761]: I0307 08:05:48.434745 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:48 crc kubenswrapper[4761]: I0307 08:05:48.434796 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:48 crc kubenswrapper[4761]: I0307 08:05:48.441519 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:48 crc kubenswrapper[4761]: I0307 08:05:48.760146 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:48 crc kubenswrapper[4761]: I0307 08:05:48.823098 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57ff97798b-fglrq"] Mar 07 08:05:58 crc kubenswrapper[4761]: I0307 08:05:58.027653 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.138171 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547846-tz9jt"] Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.139390 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547846-tz9jt" Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.142569 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.143029 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.143359 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.154709 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547846-tz9jt"] Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.331567 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fb2h\" (UniqueName: \"kubernetes.io/projected/c3231b68-1f7c-4c26-b4c8-887862d28e06-kube-api-access-8fb2h\") pod \"auto-csr-approver-29547846-tz9jt\" (UID: \"c3231b68-1f7c-4c26-b4c8-887862d28e06\") " pod="openshift-infra/auto-csr-approver-29547846-tz9jt" Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.433655 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fb2h\" (UniqueName: \"kubernetes.io/projected/c3231b68-1f7c-4c26-b4c8-887862d28e06-kube-api-access-8fb2h\") pod \"auto-csr-approver-29547846-tz9jt\" (UID: \"c3231b68-1f7c-4c26-b4c8-887862d28e06\") " pod="openshift-infra/auto-csr-approver-29547846-tz9jt" Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.454691 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fb2h\" (UniqueName: \"kubernetes.io/projected/c3231b68-1f7c-4c26-b4c8-887862d28e06-kube-api-access-8fb2h\") pod \"auto-csr-approver-29547846-tz9jt\" (UID: \"c3231b68-1f7c-4c26-b4c8-887862d28e06\") " 
pod="openshift-infra/auto-csr-approver-29547846-tz9jt" Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.462177 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547846-tz9jt" Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.981970 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547846-tz9jt"] Mar 07 08:06:01 crc kubenswrapper[4761]: I0307 08:06:01.854643 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547846-tz9jt" event={"ID":"c3231b68-1f7c-4c26-b4c8-887862d28e06","Type":"ContainerStarted","Data":"c2a01c347d5443a6f94315169ad3e585559155ff3d46c7cc3ee9d65b71a01fcb"} Mar 07 08:06:02 crc kubenswrapper[4761]: I0307 08:06:02.867381 4761 generic.go:334] "Generic (PLEG): container finished" podID="c3231b68-1f7c-4c26-b4c8-887862d28e06" containerID="968fafc69d37a3fd58309d6988cdcb39d53648dbd54cc347939d1e9351949eab" exitCode=0 Mar 07 08:06:02 crc kubenswrapper[4761]: I0307 08:06:02.867650 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547846-tz9jt" event={"ID":"c3231b68-1f7c-4c26-b4c8-887862d28e06","Type":"ContainerDied","Data":"968fafc69d37a3fd58309d6988cdcb39d53648dbd54cc347939d1e9351949eab"} Mar 07 08:06:04 crc kubenswrapper[4761]: I0307 08:06:04.174271 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547846-tz9jt" Mar 07 08:06:04 crc kubenswrapper[4761]: I0307 08:06:04.299212 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fb2h\" (UniqueName: \"kubernetes.io/projected/c3231b68-1f7c-4c26-b4c8-887862d28e06-kube-api-access-8fb2h\") pod \"c3231b68-1f7c-4c26-b4c8-887862d28e06\" (UID: \"c3231b68-1f7c-4c26-b4c8-887862d28e06\") " Mar 07 08:06:04 crc kubenswrapper[4761]: I0307 08:06:04.306692 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3231b68-1f7c-4c26-b4c8-887862d28e06-kube-api-access-8fb2h" (OuterVolumeSpecName: "kube-api-access-8fb2h") pod "c3231b68-1f7c-4c26-b4c8-887862d28e06" (UID: "c3231b68-1f7c-4c26-b4c8-887862d28e06"). InnerVolumeSpecName "kube-api-access-8fb2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:06:04 crc kubenswrapper[4761]: I0307 08:06:04.402566 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fb2h\" (UniqueName: \"kubernetes.io/projected/c3231b68-1f7c-4c26-b4c8-887862d28e06-kube-api-access-8fb2h\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:04 crc kubenswrapper[4761]: I0307 08:06:04.884971 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547846-tz9jt" event={"ID":"c3231b68-1f7c-4c26-b4c8-887862d28e06","Type":"ContainerDied","Data":"c2a01c347d5443a6f94315169ad3e585559155ff3d46c7cc3ee9d65b71a01fcb"} Mar 07 08:06:04 crc kubenswrapper[4761]: I0307 08:06:04.885008 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2a01c347d5443a6f94315169ad3e585559155ff3d46c7cc3ee9d65b71a01fcb" Mar 07 08:06:04 crc kubenswrapper[4761]: I0307 08:06:04.885064 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547846-tz9jt" Mar 07 08:06:05 crc kubenswrapper[4761]: I0307 08:06:05.236613 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547840-c7fc5"] Mar 07 08:06:05 crc kubenswrapper[4761]: I0307 08:06:05.251236 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547840-c7fc5"] Mar 07 08:06:05 crc kubenswrapper[4761]: I0307 08:06:05.714978 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="438f4d3e-a816-40a9-9518-588b04476491" path="/var/lib/kubelet/pods/438f4d3e-a816-40a9-9518-588b04476491/volumes" Mar 07 08:06:13 crc kubenswrapper[4761]: I0307 08:06:13.877455 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-57ff97798b-fglrq" podUID="0c90daf5-8fd7-4370-81d3-593760b7886f" containerName="console" containerID="cri-o://2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0" gracePeriod=15 Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.266963 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57ff97798b-fglrq_0c90daf5-8fd7-4370-81d3-593760b7886f/console/0.log" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.267056 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57ff97798b-fglrq" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.395134 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-serving-cert\") pod \"0c90daf5-8fd7-4370-81d3-593760b7886f\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.395180 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-oauth-config\") pod \"0c90daf5-8fd7-4370-81d3-593760b7886f\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.395210 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-oauth-serving-cert\") pod \"0c90daf5-8fd7-4370-81d3-593760b7886f\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.395287 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-service-ca\") pod \"0c90daf5-8fd7-4370-81d3-593760b7886f\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.395318 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwkvp\" (UniqueName: \"kubernetes.io/projected/0c90daf5-8fd7-4370-81d3-593760b7886f-kube-api-access-rwkvp\") pod \"0c90daf5-8fd7-4370-81d3-593760b7886f\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.395406 4761 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-trusted-ca-bundle\") pod \"0c90daf5-8fd7-4370-81d3-593760b7886f\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.395424 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-console-config\") pod \"0c90daf5-8fd7-4370-81d3-593760b7886f\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.396244 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-console-config" (OuterVolumeSpecName: "console-config") pod "0c90daf5-8fd7-4370-81d3-593760b7886f" (UID: "0c90daf5-8fd7-4370-81d3-593760b7886f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.396656 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-service-ca" (OuterVolumeSpecName: "service-ca") pod "0c90daf5-8fd7-4370-81d3-593760b7886f" (UID: "0c90daf5-8fd7-4370-81d3-593760b7886f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.396955 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0c90daf5-8fd7-4370-81d3-593760b7886f" (UID: "0c90daf5-8fd7-4370-81d3-593760b7886f"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.397251 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0c90daf5-8fd7-4370-81d3-593760b7886f" (UID: "0c90daf5-8fd7-4370-81d3-593760b7886f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.401923 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0c90daf5-8fd7-4370-81d3-593760b7886f" (UID: "0c90daf5-8fd7-4370-81d3-593760b7886f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.412683 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0c90daf5-8fd7-4370-81d3-593760b7886f" (UID: "0c90daf5-8fd7-4370-81d3-593760b7886f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.420848 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c90daf5-8fd7-4370-81d3-593760b7886f-kube-api-access-rwkvp" (OuterVolumeSpecName: "kube-api-access-rwkvp") pod "0c90daf5-8fd7-4370-81d3-593760b7886f" (UID: "0c90daf5-8fd7-4370-81d3-593760b7886f"). InnerVolumeSpecName "kube-api-access-rwkvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.497040 4761 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-service-ca\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.497080 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwkvp\" (UniqueName: \"kubernetes.io/projected/0c90daf5-8fd7-4370-81d3-593760b7886f-kube-api-access-rwkvp\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.497094 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.497105 4761 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-console-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.497116 4761 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.497129 4761 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.497140 4761 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:14 crc 
kubenswrapper[4761]: I0307 08:06:14.973605 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57ff97798b-fglrq_0c90daf5-8fd7-4370-81d3-593760b7886f/console/0.log" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.973649 4761 generic.go:334] "Generic (PLEG): container finished" podID="0c90daf5-8fd7-4370-81d3-593760b7886f" containerID="2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0" exitCode=2 Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.973685 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57ff97798b-fglrq" event={"ID":"0c90daf5-8fd7-4370-81d3-593760b7886f","Type":"ContainerDied","Data":"2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0"} Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.973749 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57ff97798b-fglrq" event={"ID":"0c90daf5-8fd7-4370-81d3-593760b7886f","Type":"ContainerDied","Data":"efa8419c67761e6f44973550c8c4891d02eea844e1a31bf44687eb18787132b7"} Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.973756 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57ff97798b-fglrq" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.973772 4761 scope.go:117] "RemoveContainer" containerID="2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.994382 4761 scope.go:117] "RemoveContainer" containerID="2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0" Mar 07 08:06:14 crc kubenswrapper[4761]: E0307 08:06:14.994939 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0\": container with ID starting with 2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0 not found: ID does not exist" containerID="2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.994981 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0"} err="failed to get container status \"2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0\": rpc error: code = NotFound desc = could not find container \"2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0\": container with ID starting with 2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0 not found: ID does not exist" Mar 07 08:06:15 crc kubenswrapper[4761]: I0307 08:06:15.008238 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57ff97798b-fglrq"] Mar 07 08:06:15 crc kubenswrapper[4761]: I0307 08:06:15.016294 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-57ff97798b-fglrq"] Mar 07 08:06:15 crc kubenswrapper[4761]: I0307 08:06:15.715176 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="0c90daf5-8fd7-4370-81d3-593760b7886f" path="/var/lib/kubelet/pods/0c90daf5-8fd7-4370-81d3-593760b7886f/volumes" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.081471 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq"] Mar 07 08:06:16 crc kubenswrapper[4761]: E0307 08:06:16.081993 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c90daf5-8fd7-4370-81d3-593760b7886f" containerName="console" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.082023 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c90daf5-8fd7-4370-81d3-593760b7886f" containerName="console" Mar 07 08:06:16 crc kubenswrapper[4761]: E0307 08:06:16.082046 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3231b68-1f7c-4c26-b4c8-887862d28e06" containerName="oc" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.082066 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3231b68-1f7c-4c26-b4c8-887862d28e06" containerName="oc" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.082300 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3231b68-1f7c-4c26-b4c8-887862d28e06" containerName="oc" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.082339 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c90daf5-8fd7-4370-81d3-593760b7886f" containerName="console" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.084513 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.087675 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.098576 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq"] Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.223347 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.223425 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.223497 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7khz\" (UniqueName: \"kubernetes.io/projected/e79675f7-d335-4f19-b872-22f70dccc150-kube-api-access-l7khz\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: 
I0307 08:06:16.325109 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.325159 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.325203 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7khz\" (UniqueName: \"kubernetes.io/projected/e79675f7-d335-4f19-b872-22f70dccc150-kube-api-access-l7khz\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.325904 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.325927 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.358532 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7khz\" (UniqueName: \"kubernetes.io/projected/e79675f7-d335-4f19-b872-22f70dccc150-kube-api-access-l7khz\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.406359 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.863301 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq"] Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.998001 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" event={"ID":"e79675f7-d335-4f19-b872-22f70dccc150","Type":"ContainerStarted","Data":"29cfb6982fec8bb3921e6e06458f25b43ad14b572aa560bd435e08a978dd9b53"} Mar 07 08:06:18 crc kubenswrapper[4761]: I0307 08:06:18.011511 4761 generic.go:334] "Generic (PLEG): container finished" podID="e79675f7-d335-4f19-b872-22f70dccc150" containerID="ece90ffdad3f36a739dffa4d59dc8187730b3d72597d1bdb9d2d8e09418becd4" exitCode=0 Mar 07 08:06:18 crc kubenswrapper[4761]: I0307 08:06:18.011644 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" event={"ID":"e79675f7-d335-4f19-b872-22f70dccc150","Type":"ContainerDied","Data":"ece90ffdad3f36a739dffa4d59dc8187730b3d72597d1bdb9d2d8e09418becd4"} Mar 07 08:06:20 crc kubenswrapper[4761]: I0307 08:06:20.030555 4761 generic.go:334] "Generic (PLEG): container finished" podID="e79675f7-d335-4f19-b872-22f70dccc150" containerID="6c71247254ce37831767591a16ed6fa4afc6bb1b9eb8e219c6ce709dcd3fab9a" exitCode=0 Mar 07 08:06:20 crc kubenswrapper[4761]: I0307 08:06:20.030713 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" event={"ID":"e79675f7-d335-4f19-b872-22f70dccc150","Type":"ContainerDied","Data":"6c71247254ce37831767591a16ed6fa4afc6bb1b9eb8e219c6ce709dcd3fab9a"} Mar 07 08:06:21 crc kubenswrapper[4761]: I0307 08:06:21.040324 4761 generic.go:334] "Generic (PLEG): container finished" podID="e79675f7-d335-4f19-b872-22f70dccc150" containerID="05224abf8a736536647b64896bef8af3ee92c3622a542d823c25b59a3d8ed6d8" exitCode=0 Mar 07 08:06:21 crc kubenswrapper[4761]: I0307 08:06:21.040383 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" event={"ID":"e79675f7-d335-4f19-b872-22f70dccc150","Type":"ContainerDied","Data":"05224abf8a736536647b64896bef8af3ee92c3622a542d823c25b59a3d8ed6d8"} Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.396375 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.527469 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-util\") pod \"e79675f7-d335-4f19-b872-22f70dccc150\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.527938 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7khz\" (UniqueName: \"kubernetes.io/projected/e79675f7-d335-4f19-b872-22f70dccc150-kube-api-access-l7khz\") pod \"e79675f7-d335-4f19-b872-22f70dccc150\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.528025 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-bundle\") pod \"e79675f7-d335-4f19-b872-22f70dccc150\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.530113 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-bundle" (OuterVolumeSpecName: "bundle") pod "e79675f7-d335-4f19-b872-22f70dccc150" (UID: "e79675f7-d335-4f19-b872-22f70dccc150"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.533532 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e79675f7-d335-4f19-b872-22f70dccc150-kube-api-access-l7khz" (OuterVolumeSpecName: "kube-api-access-l7khz") pod "e79675f7-d335-4f19-b872-22f70dccc150" (UID: "e79675f7-d335-4f19-b872-22f70dccc150"). InnerVolumeSpecName "kube-api-access-l7khz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.630057 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7khz\" (UniqueName: \"kubernetes.io/projected/e79675f7-d335-4f19-b872-22f70dccc150-kube-api-access-l7khz\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.630092 4761 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.820229 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-util" (OuterVolumeSpecName: "util") pod "e79675f7-d335-4f19-b872-22f70dccc150" (UID: "e79675f7-d335-4f19-b872-22f70dccc150"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.832640 4761 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-util\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:23 crc kubenswrapper[4761]: I0307 08:06:23.061334 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" event={"ID":"e79675f7-d335-4f19-b872-22f70dccc150","Type":"ContainerDied","Data":"29cfb6982fec8bb3921e6e06458f25b43ad14b572aa560bd435e08a978dd9b53"} Mar 07 08:06:23 crc kubenswrapper[4761]: I0307 08:06:23.061380 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29cfb6982fec8bb3921e6e06458f25b43ad14b572aa560bd435e08a978dd9b53" Mar 07 08:06:23 crc kubenswrapper[4761]: I0307 08:06:23.061442 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.840849 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc"] Mar 07 08:06:29 crc kubenswrapper[4761]: E0307 08:06:29.842781 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79675f7-d335-4f19-b872-22f70dccc150" containerName="pull" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.842893 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79675f7-d335-4f19-b872-22f70dccc150" containerName="pull" Mar 07 08:06:29 crc kubenswrapper[4761]: E0307 08:06:29.842984 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79675f7-d335-4f19-b872-22f70dccc150" containerName="util" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.843087 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79675f7-d335-4f19-b872-22f70dccc150" containerName="util" Mar 07 08:06:29 crc kubenswrapper[4761]: E0307 08:06:29.843160 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79675f7-d335-4f19-b872-22f70dccc150" containerName="extract" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.843244 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79675f7-d335-4f19-b872-22f70dccc150" containerName="extract" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.843481 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="e79675f7-d335-4f19-b872-22f70dccc150" containerName="extract" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.844200 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.847594 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.847978 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-q6v5d" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.848135 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.848312 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.848543 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.859707 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc"] Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.949204 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c23f924-b431-4a3e-819b-713e132885f4-apiservice-cert\") pod \"metallb-operator-controller-manager-5b98ff9599-kldnc\" (UID: \"4c23f924-b431-4a3e-819b-713e132885f4\") " pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.949592 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c23f924-b431-4a3e-819b-713e132885f4-webhook-cert\") pod \"metallb-operator-controller-manager-5b98ff9599-kldnc\" (UID: 
\"4c23f924-b431-4a3e-819b-713e132885f4\") " pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.949632 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc9pt\" (UniqueName: \"kubernetes.io/projected/4c23f924-b431-4a3e-819b-713e132885f4-kube-api-access-fc9pt\") pod \"metallb-operator-controller-manager-5b98ff9599-kldnc\" (UID: \"4c23f924-b431-4a3e-819b-713e132885f4\") " pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.051239 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc9pt\" (UniqueName: \"kubernetes.io/projected/4c23f924-b431-4a3e-819b-713e132885f4-kube-api-access-fc9pt\") pod \"metallb-operator-controller-manager-5b98ff9599-kldnc\" (UID: \"4c23f924-b431-4a3e-819b-713e132885f4\") " pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.051381 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c23f924-b431-4a3e-819b-713e132885f4-apiservice-cert\") pod \"metallb-operator-controller-manager-5b98ff9599-kldnc\" (UID: \"4c23f924-b431-4a3e-819b-713e132885f4\") " pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.051445 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c23f924-b431-4a3e-819b-713e132885f4-webhook-cert\") pod \"metallb-operator-controller-manager-5b98ff9599-kldnc\" (UID: \"4c23f924-b431-4a3e-819b-713e132885f4\") " pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.057156 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c23f924-b431-4a3e-819b-713e132885f4-apiservice-cert\") pod \"metallb-operator-controller-manager-5b98ff9599-kldnc\" (UID: \"4c23f924-b431-4a3e-819b-713e132885f4\") " pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.057247 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c23f924-b431-4a3e-819b-713e132885f4-webhook-cert\") pod \"metallb-operator-controller-manager-5b98ff9599-kldnc\" (UID: \"4c23f924-b431-4a3e-819b-713e132885f4\") " pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.081030 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc9pt\" (UniqueName: \"kubernetes.io/projected/4c23f924-b431-4a3e-819b-713e132885f4-kube-api-access-fc9pt\") pod \"metallb-operator-controller-manager-5b98ff9599-kldnc\" (UID: \"4c23f924-b431-4a3e-819b-713e132885f4\") " pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.168005 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.169307 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6899cc684-8cx59"] Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.170216 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.172351 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.172570 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.172746 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-4xbn8" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.194129 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6899cc684-8cx59"] Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.254908 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrz2s\" (UniqueName: \"kubernetes.io/projected/3dc06a77-85c3-42a9-a972-c3f33e46df4b-kube-api-access-rrz2s\") pod \"metallb-operator-webhook-server-6899cc684-8cx59\" (UID: \"3dc06a77-85c3-42a9-a972-c3f33e46df4b\") " pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.255094 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3dc06a77-85c3-42a9-a972-c3f33e46df4b-webhook-cert\") pod \"metallb-operator-webhook-server-6899cc684-8cx59\" (UID: \"3dc06a77-85c3-42a9-a972-c3f33e46df4b\") " pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.255213 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/3dc06a77-85c3-42a9-a972-c3f33e46df4b-apiservice-cert\") pod \"metallb-operator-webhook-server-6899cc684-8cx59\" (UID: \"3dc06a77-85c3-42a9-a972-c3f33e46df4b\") " pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.356996 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrz2s\" (UniqueName: \"kubernetes.io/projected/3dc06a77-85c3-42a9-a972-c3f33e46df4b-kube-api-access-rrz2s\") pod \"metallb-operator-webhook-server-6899cc684-8cx59\" (UID: \"3dc06a77-85c3-42a9-a972-c3f33e46df4b\") " pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.357401 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3dc06a77-85c3-42a9-a972-c3f33e46df4b-webhook-cert\") pod \"metallb-operator-webhook-server-6899cc684-8cx59\" (UID: \"3dc06a77-85c3-42a9-a972-c3f33e46df4b\") " pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.357466 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3dc06a77-85c3-42a9-a972-c3f33e46df4b-apiservice-cert\") pod \"metallb-operator-webhook-server-6899cc684-8cx59\" (UID: \"3dc06a77-85c3-42a9-a972-c3f33e46df4b\") " pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.377096 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3dc06a77-85c3-42a9-a972-c3f33e46df4b-webhook-cert\") pod \"metallb-operator-webhook-server-6899cc684-8cx59\" (UID: \"3dc06a77-85c3-42a9-a972-c3f33e46df4b\") " pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc 
kubenswrapper[4761]: I0307 08:06:30.379762 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrz2s\" (UniqueName: \"kubernetes.io/projected/3dc06a77-85c3-42a9-a972-c3f33e46df4b-kube-api-access-rrz2s\") pod \"metallb-operator-webhook-server-6899cc684-8cx59\" (UID: \"3dc06a77-85c3-42a9-a972-c3f33e46df4b\") " pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.385913 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3dc06a77-85c3-42a9-a972-c3f33e46df4b-apiservice-cert\") pod \"metallb-operator-webhook-server-6899cc684-8cx59\" (UID: \"3dc06a77-85c3-42a9-a972-c3f33e46df4b\") " pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.535293 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.673592 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc"] Mar 07 08:06:30 crc kubenswrapper[4761]: W0307 08:06:30.681790 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c23f924_b431_4a3e_819b_713e132885f4.slice/crio-c44f038910d22ea5e95ea1d4d586af81b063436432da0cb9758814d01f7f29cb WatchSource:0}: Error finding container c44f038910d22ea5e95ea1d4d586af81b063436432da0cb9758814d01f7f29cb: Status 404 returned error can't find the container with id c44f038910d22ea5e95ea1d4d586af81b063436432da0cb9758814d01f7f29cb Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.970871 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6899cc684-8cx59"] Mar 07 08:06:30 crc 
kubenswrapper[4761]: W0307 08:06:30.977192 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dc06a77_85c3_42a9_a972_c3f33e46df4b.slice/crio-6a496aa87dce3c7de1493e6aa56d37e60a0afd585ccac8bf0194e01a069479f1 WatchSource:0}: Error finding container 6a496aa87dce3c7de1493e6aa56d37e60a0afd585ccac8bf0194e01a069479f1: Status 404 returned error can't find the container with id 6a496aa87dce3c7de1493e6aa56d37e60a0afd585ccac8bf0194e01a069479f1 Mar 07 08:06:31 crc kubenswrapper[4761]: I0307 08:06:31.129188 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" event={"ID":"4c23f924-b431-4a3e-819b-713e132885f4","Type":"ContainerStarted","Data":"c44f038910d22ea5e95ea1d4d586af81b063436432da0cb9758814d01f7f29cb"} Mar 07 08:06:31 crc kubenswrapper[4761]: I0307 08:06:31.131057 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" event={"ID":"3dc06a77-85c3-42a9-a972-c3f33e46df4b","Type":"ContainerStarted","Data":"6a496aa87dce3c7de1493e6aa56d37e60a0afd585ccac8bf0194e01a069479f1"} Mar 07 08:06:37 crc kubenswrapper[4761]: I0307 08:06:37.177140 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" event={"ID":"4c23f924-b431-4a3e-819b-713e132885f4","Type":"ContainerStarted","Data":"63c84d3254baf5d95dfdfc00082aac6d5f37aea286b07b39f9aa5191bea283bc"} Mar 07 08:06:37 crc kubenswrapper[4761]: I0307 08:06:37.177847 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:37 crc kubenswrapper[4761]: I0307 08:06:37.181356 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" 
event={"ID":"3dc06a77-85c3-42a9-a972-c3f33e46df4b","Type":"ContainerStarted","Data":"86236478da25e68057a2cee3c5365a7298b691e70eadb4671f40e0f1ed9dd870"} Mar 07 08:06:37 crc kubenswrapper[4761]: I0307 08:06:37.181534 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:37 crc kubenswrapper[4761]: I0307 08:06:37.204207 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" podStartSLOduration=2.860540443 podStartE2EDuration="8.204191922s" podCreationTimestamp="2026-03-07 08:06:29 +0000 UTC" firstStartedPulling="2026-03-07 08:06:30.699361428 +0000 UTC m=+1047.608527903" lastFinishedPulling="2026-03-07 08:06:36.043012907 +0000 UTC m=+1052.952179382" observedRunningTime="2026-03-07 08:06:37.20045009 +0000 UTC m=+1054.109616575" watchObservedRunningTime="2026-03-07 08:06:37.204191922 +0000 UTC m=+1054.113358397" Mar 07 08:06:37 crc kubenswrapper[4761]: I0307 08:06:37.231769 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" podStartSLOduration=2.154198207 podStartE2EDuration="7.231752074s" podCreationTimestamp="2026-03-07 08:06:30 +0000 UTC" firstStartedPulling="2026-03-07 08:06:30.980850496 +0000 UTC m=+1047.890016961" lastFinishedPulling="2026-03-07 08:06:36.058404353 +0000 UTC m=+1052.967570828" observedRunningTime="2026-03-07 08:06:37.229741655 +0000 UTC m=+1054.138908160" watchObservedRunningTime="2026-03-07 08:06:37.231752074 +0000 UTC m=+1054.140918569" Mar 07 08:06:43 crc kubenswrapper[4761]: I0307 08:06:43.900565 4761 scope.go:117] "RemoveContainer" containerID="5963452c1289655e1fa326e8a7200c203507ffba57d60c3182b659ac7a387bdb" Mar 07 08:06:50 crc kubenswrapper[4761]: I0307 08:06:50.543320 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.172581 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.909878 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-lzrcd"] Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.912656 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.927062 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk"] Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.928009 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.928549 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.928824 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-95bjv" Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.929228 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.950148 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.950847 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk"] Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.013404 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-frr-startup\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.013459 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh2m7\" (UniqueName: \"kubernetes.io/projected/ffb7fdc9-854e-4990-81e1-b14fb9966476-kube-api-access-vh2m7\") pod \"frr-k8s-webhook-server-7f989f654f-4sfgk\" (UID: \"ffb7fdc9-854e-4990-81e1-b14fb9966476\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.013500 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-reloader\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.013620 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7brj8\" (UniqueName: \"kubernetes.io/projected/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-kube-api-access-7brj8\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.013695 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-metrics\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.013750 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" 
(UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-frr-conf\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.013815 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-frr-sockets\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.013890 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-metrics-certs\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.013941 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffb7fdc9-854e-4990-81e1-b14fb9966476-cert\") pod \"frr-k8s-webhook-server-7f989f654f-4sfgk\" (UID: \"ffb7fdc9-854e-4990-81e1-b14fb9966476\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.026280 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-75b4z"] Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.028059 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.031904 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.031904 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.032206 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.032274 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-6zwt5" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.038305 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-m2tp4"] Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.039406 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.042701 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.055652 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-m2tp4"] Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115608 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-frr-startup\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115649 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh2m7\" (UniqueName: \"kubernetes.io/projected/ffb7fdc9-854e-4990-81e1-b14fb9966476-kube-api-access-vh2m7\") pod \"frr-k8s-webhook-server-7f989f654f-4sfgk\" (UID: \"ffb7fdc9-854e-4990-81e1-b14fb9966476\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115673 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-reloader\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115702 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7brj8\" (UniqueName: \"kubernetes.io/projected/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-kube-api-access-7brj8\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115762 
4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbcfq\" (UniqueName: \"kubernetes.io/projected/193543ae-839d-485e-a238-ae40e69f7b24-kube-api-access-bbcfq\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115788 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adfa916b-8977-446f-9387-932788e51e10-cert\") pod \"controller-86ddb6bd46-m2tp4\" (UID: \"adfa916b-8977-446f-9387-932788e51e10\") " pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115809 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-metrics\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115841 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-memberlist\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115864 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/adfa916b-8977-446f-9387-932788e51e10-metrics-certs\") pod \"controller-86ddb6bd46-m2tp4\" (UID: \"adfa916b-8977-446f-9387-932788e51e10\") " pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115881 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-frr-conf\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115924 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/193543ae-839d-485e-a238-ae40e69f7b24-metallb-excludel2\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115952 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-frr-sockets\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115993 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-metrics-certs\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.116016 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-metrics-certs\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.116075 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffb7fdc9-854e-4990-81e1-b14fb9966476-cert\") pod \"frr-k8s-webhook-server-7f989f654f-4sfgk\" (UID: 
\"ffb7fdc9-854e-4990-81e1-b14fb9966476\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.116106 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlkfs\" (UniqueName: \"kubernetes.io/projected/adfa916b-8977-446f-9387-932788e51e10-kube-api-access-wlkfs\") pod \"controller-86ddb6bd46-m2tp4\" (UID: \"adfa916b-8977-446f-9387-932788e51e10\") " pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.116532 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-frr-startup\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: E0307 08:07:11.116774 4761 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Mar 07 08:07:11 crc kubenswrapper[4761]: E0307 08:07:11.116833 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffb7fdc9-854e-4990-81e1-b14fb9966476-cert podName:ffb7fdc9-854e-4990-81e1-b14fb9966476 nodeName:}" failed. No retries permitted until 2026-03-07 08:07:11.616814815 +0000 UTC m=+1088.525981380 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ffb7fdc9-854e-4990-81e1-b14fb9966476-cert") pod "frr-k8s-webhook-server-7f989f654f-4sfgk" (UID: "ffb7fdc9-854e-4990-81e1-b14fb9966476") : secret "frr-k8s-webhook-server-cert" not found Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.117053 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-metrics\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.117136 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-reloader\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.117230 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-frr-sockets\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.117531 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-frr-conf\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.122329 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-metrics-certs\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " 
pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.132216 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh2m7\" (UniqueName: \"kubernetes.io/projected/ffb7fdc9-854e-4990-81e1-b14fb9966476-kube-api-access-vh2m7\") pod \"frr-k8s-webhook-server-7f989f654f-4sfgk\" (UID: \"ffb7fdc9-854e-4990-81e1-b14fb9966476\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.146684 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7brj8\" (UniqueName: \"kubernetes.io/projected/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-kube-api-access-7brj8\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.217289 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlkfs\" (UniqueName: \"kubernetes.io/projected/adfa916b-8977-446f-9387-932788e51e10-kube-api-access-wlkfs\") pod \"controller-86ddb6bd46-m2tp4\" (UID: \"adfa916b-8977-446f-9387-932788e51e10\") " pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.217734 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbcfq\" (UniqueName: \"kubernetes.io/projected/193543ae-839d-485e-a238-ae40e69f7b24-kube-api-access-bbcfq\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.217778 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adfa916b-8977-446f-9387-932788e51e10-cert\") pod \"controller-86ddb6bd46-m2tp4\" (UID: \"adfa916b-8977-446f-9387-932788e51e10\") " pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc 
kubenswrapper[4761]: I0307 08:07:11.217820 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-memberlist\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.217850 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/adfa916b-8977-446f-9387-932788e51e10-metrics-certs\") pod \"controller-86ddb6bd46-m2tp4\" (UID: \"adfa916b-8977-446f-9387-932788e51e10\") " pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.217892 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/193543ae-839d-485e-a238-ae40e69f7b24-metallb-excludel2\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: E0307 08:07:11.217929 4761 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 07 08:07:11 crc kubenswrapper[4761]: E0307 08:07:11.217971 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-memberlist podName:193543ae-839d-485e-a238-ae40e69f7b24 nodeName:}" failed. No retries permitted until 2026-03-07 08:07:11.717958547 +0000 UTC m=+1088.627125022 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-memberlist") pod "speaker-75b4z" (UID: "193543ae-839d-485e-a238-ae40e69f7b24") : secret "metallb-memberlist" not found Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.217986 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-metrics-certs\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.218682 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/193543ae-839d-485e-a238-ae40e69f7b24-metallb-excludel2\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.220505 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.221568 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-metrics-certs\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.221662 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/adfa916b-8977-446f-9387-932788e51e10-metrics-certs\") pod \"controller-86ddb6bd46-m2tp4\" (UID: \"adfa916b-8977-446f-9387-932788e51e10\") " pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.234258 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/adfa916b-8977-446f-9387-932788e51e10-cert\") pod \"controller-86ddb6bd46-m2tp4\" (UID: \"adfa916b-8977-446f-9387-932788e51e10\") " pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.243577 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbcfq\" (UniqueName: \"kubernetes.io/projected/193543ae-839d-485e-a238-ae40e69f7b24-kube-api-access-bbcfq\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.244246 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.252545 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlkfs\" (UniqueName: \"kubernetes.io/projected/adfa916b-8977-446f-9387-932788e51e10-kube-api-access-wlkfs\") pod \"controller-86ddb6bd46-m2tp4\" (UID: \"adfa916b-8977-446f-9387-932788e51e10\") " pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.363513 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.380803 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.469802 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerStarted","Data":"6ca7d6c2024e2ac38f6afeabe4220ee2f3aa7580ea4e104dd5663df1d0794422"} Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.623635 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffb7fdc9-854e-4990-81e1-b14fb9966476-cert\") pod \"frr-k8s-webhook-server-7f989f654f-4sfgk\" (UID: \"ffb7fdc9-854e-4990-81e1-b14fb9966476\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.630001 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffb7fdc9-854e-4990-81e1-b14fb9966476-cert\") pod \"frr-k8s-webhook-server-7f989f654f-4sfgk\" (UID: \"ffb7fdc9-854e-4990-81e1-b14fb9966476\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.725314 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-memberlist\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: E0307 08:07:11.725507 4761 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 07 08:07:11 crc kubenswrapper[4761]: E0307 08:07:11.725589 4761 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-memberlist podName:193543ae-839d-485e-a238-ae40e69f7b24 nodeName:}" failed. No retries permitted until 2026-03-07 08:07:12.725572371 +0000 UTC m=+1089.634738846 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-memberlist") pod "speaker-75b4z" (UID: "193543ae-839d-485e-a238-ae40e69f7b24") : secret "metallb-memberlist" not found Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.848882 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-m2tp4"] Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.853709 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:12 crc kubenswrapper[4761]: I0307 08:07:12.246994 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk"] Mar 07 08:07:12 crc kubenswrapper[4761]: W0307 08:07:12.251448 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffb7fdc9_854e_4990_81e1_b14fb9966476.slice/crio-a9395b78652ed8d64802e6ec4b3918e19eecdb391ab70812cc44793a18afb8d2 WatchSource:0}: Error finding container a9395b78652ed8d64802e6ec4b3918e19eecdb391ab70812cc44793a18afb8d2: Status 404 returned error can't find the container with id a9395b78652ed8d64802e6ec4b3918e19eecdb391ab70812cc44793a18afb8d2 Mar 07 08:07:12 crc kubenswrapper[4761]: I0307 08:07:12.488996 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-m2tp4" event={"ID":"adfa916b-8977-446f-9387-932788e51e10","Type":"ContainerStarted","Data":"26f2a62dc043309dc4ef24f20b7a3d1d584762998f97feb9081f54948f509d13"} Mar 07 08:07:12 crc kubenswrapper[4761]: I0307 08:07:12.489387 4761 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-m2tp4" event={"ID":"adfa916b-8977-446f-9387-932788e51e10","Type":"ContainerStarted","Data":"f5996be1025b4f5f0291b64b79b5aca0f48f5e117c9cede01f26742efeeaacd6"} Mar 07 08:07:12 crc kubenswrapper[4761]: I0307 08:07:12.489405 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-m2tp4" event={"ID":"adfa916b-8977-446f-9387-932788e51e10","Type":"ContainerStarted","Data":"514a8b636fc0e2a21d46121e49e840e704625c6add5e9a82af7b58492bf0d464"} Mar 07 08:07:12 crc kubenswrapper[4761]: I0307 08:07:12.489778 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:12 crc kubenswrapper[4761]: I0307 08:07:12.491342 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" event={"ID":"ffb7fdc9-854e-4990-81e1-b14fb9966476","Type":"ContainerStarted","Data":"a9395b78652ed8d64802e6ec4b3918e19eecdb391ab70812cc44793a18afb8d2"} Mar 07 08:07:12 crc kubenswrapper[4761]: I0307 08:07:12.741700 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-memberlist\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:12 crc kubenswrapper[4761]: I0307 08:07:12.751487 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-memberlist\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:12 crc kubenswrapper[4761]: I0307 08:07:12.845810 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-75b4z" Mar 07 08:07:13 crc kubenswrapper[4761]: I0307 08:07:13.522020 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-75b4z" event={"ID":"193543ae-839d-485e-a238-ae40e69f7b24","Type":"ContainerStarted","Data":"b2d7d5264a3c0071ab78c6709d6e66c0d2256b86fa1c3c90f91798ffb51d94fe"} Mar 07 08:07:13 crc kubenswrapper[4761]: I0307 08:07:13.522421 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-75b4z" event={"ID":"193543ae-839d-485e-a238-ae40e69f7b24","Type":"ContainerStarted","Data":"728c9d850e2981887e404b1c4d33ab7b98374d312289c774f73b86896ee865e6"} Mar 07 08:07:13 crc kubenswrapper[4761]: I0307 08:07:13.522444 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-75b4z" event={"ID":"193543ae-839d-485e-a238-ae40e69f7b24","Type":"ContainerStarted","Data":"bc4b720a2325559768244b2d879acc092492bafa3e146f97402b8eef27c5cd47"} Mar 07 08:07:13 crc kubenswrapper[4761]: I0307 08:07:13.522727 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-75b4z" Mar 07 08:07:13 crc kubenswrapper[4761]: I0307 08:07:13.548026 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-75b4z" podStartSLOduration=2.548008954 podStartE2EDuration="2.548008954s" podCreationTimestamp="2026-03-07 08:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:07:13.543672468 +0000 UTC m=+1090.452838943" watchObservedRunningTime="2026-03-07 08:07:13.548008954 +0000 UTC m=+1090.457175429" Mar 07 08:07:13 crc kubenswrapper[4761]: I0307 08:07:13.549406 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-m2tp4" podStartSLOduration=2.549397818 podStartE2EDuration="2.549397818s" podCreationTimestamp="2026-03-07 08:07:11 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:07:12.507868618 +0000 UTC m=+1089.417035113" watchObservedRunningTime="2026-03-07 08:07:13.549397818 +0000 UTC m=+1090.458564293" Mar 07 08:07:13 crc kubenswrapper[4761]: I0307 08:07:13.769074 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:07:13 crc kubenswrapper[4761]: I0307 08:07:13.769123 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:07:19 crc kubenswrapper[4761]: I0307 08:07:19.567832 4761 generic.go:334] "Generic (PLEG): container finished" podID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerID="3d0f045c081e057419f88ee695151bb76c7e4e3b87356b225a90c35c116d68e0" exitCode=0 Mar 07 08:07:19 crc kubenswrapper[4761]: I0307 08:07:19.568111 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerDied","Data":"3d0f045c081e057419f88ee695151bb76c7e4e3b87356b225a90c35c116d68e0"} Mar 07 08:07:19 crc kubenswrapper[4761]: I0307 08:07:19.571327 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" event={"ID":"ffb7fdc9-854e-4990-81e1-b14fb9966476","Type":"ContainerStarted","Data":"5a798ce3c346c7460eb28cf50d47ec25f3c63e67d83bae5f4cd78e85382c7f5c"} Mar 07 08:07:19 crc kubenswrapper[4761]: I0307 08:07:19.572110 4761 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:19 crc kubenswrapper[4761]: I0307 08:07:19.626382 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" podStartSLOduration=3.202127173 podStartE2EDuration="9.626352256s" podCreationTimestamp="2026-03-07 08:07:10 +0000 UTC" firstStartedPulling="2026-03-07 08:07:12.25507145 +0000 UTC m=+1089.164237925" lastFinishedPulling="2026-03-07 08:07:18.679296523 +0000 UTC m=+1095.588463008" observedRunningTime="2026-03-07 08:07:19.618683659 +0000 UTC m=+1096.527850184" watchObservedRunningTime="2026-03-07 08:07:19.626352256 +0000 UTC m=+1096.535518761" Mar 07 08:07:20 crc kubenswrapper[4761]: I0307 08:07:20.579945 4761 generic.go:334] "Generic (PLEG): container finished" podID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerID="25c103a3ddab04b59ec0442c4de60e411db7870ac9a0cbdc16f8b4684f92c572" exitCode=0 Mar 07 08:07:20 crc kubenswrapper[4761]: I0307 08:07:20.580029 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerDied","Data":"25c103a3ddab04b59ec0442c4de60e411db7870ac9a0cbdc16f8b4684f92c572"} Mar 07 08:07:21 crc kubenswrapper[4761]: I0307 08:07:21.368541 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:21 crc kubenswrapper[4761]: I0307 08:07:21.590365 4761 generic.go:334] "Generic (PLEG): container finished" podID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerID="49c0b1253422de8ea1890365f597c1c0fd1d1a5e9510d045fd3098329cdbe227" exitCode=0 Mar 07 08:07:21 crc kubenswrapper[4761]: I0307 08:07:21.590573 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" 
event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerDied","Data":"49c0b1253422de8ea1890365f597c1c0fd1d1a5e9510d045fd3098329cdbe227"} Mar 07 08:07:22 crc kubenswrapper[4761]: I0307 08:07:22.603504 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerStarted","Data":"daaf743a2d02babcd81f5c07e1755c76f91d8cb9ee58027a6f833e3f267708e8"} Mar 07 08:07:22 crc kubenswrapper[4761]: I0307 08:07:22.603886 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerStarted","Data":"31c3806272cb59a770f3f00af1f255691ef3996086b58fe454192834408be9fd"} Mar 07 08:07:22 crc kubenswrapper[4761]: I0307 08:07:22.603905 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerStarted","Data":"125a8807630d45b1445c93435324d686658873352d3fbb58e13cb3d80d9c5e00"} Mar 07 08:07:22 crc kubenswrapper[4761]: I0307 08:07:22.603919 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerStarted","Data":"ff300f5efb12f93334587e3d904527e3fa66d7acbf968e46eb5467420491f1c9"} Mar 07 08:07:22 crc kubenswrapper[4761]: I0307 08:07:22.603933 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerStarted","Data":"734ddf0b9f61b47fe6555044a7cc84fd3ee785ebb8420becbfe23b851f2d2a4b"} Mar 07 08:07:23 crc kubenswrapper[4761]: I0307 08:07:23.622780 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerStarted","Data":"e73619ce2c5813aeaa6ccaba09ca8b204ef9eb6c33e29c9dd9b93ddef64bd992"} Mar 07 08:07:23 crc 
kubenswrapper[4761]: I0307 08:07:23.622975 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:23 crc kubenswrapper[4761]: I0307 08:07:23.645094 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-lzrcd" podStartSLOduration=6.371155832 podStartE2EDuration="13.645075488s" podCreationTimestamp="2026-03-07 08:07:10 +0000 UTC" firstStartedPulling="2026-03-07 08:07:11.380576491 +0000 UTC m=+1088.289742966" lastFinishedPulling="2026-03-07 08:07:18.654496137 +0000 UTC m=+1095.563662622" observedRunningTime="2026-03-07 08:07:23.641824929 +0000 UTC m=+1100.550991424" watchObservedRunningTime="2026-03-07 08:07:23.645075488 +0000 UTC m=+1100.554241963" Mar 07 08:07:26 crc kubenswrapper[4761]: I0307 08:07:26.245498 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:26 crc kubenswrapper[4761]: I0307 08:07:26.302763 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:31 crc kubenswrapper[4761]: I0307 08:07:31.247996 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:31 crc kubenswrapper[4761]: I0307 08:07:31.860022 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:32 crc kubenswrapper[4761]: I0307 08:07:32.849277 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-75b4z" Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.419269 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rvt8q"] Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.420766 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rvt8q" Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.424003 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-znj4z" Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.424069 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.424661 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.463032 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rvt8q"] Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.539460 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wll2b\" (UniqueName: \"kubernetes.io/projected/878f8414-9fcd-4c4f-ae22-d24d32274c54-kube-api-access-wll2b\") pod \"openstack-operator-index-rvt8q\" (UID: \"878f8414-9fcd-4c4f-ae22-d24d32274c54\") " pod="openstack-operators/openstack-operator-index-rvt8q" Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.641045 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wll2b\" (UniqueName: \"kubernetes.io/projected/878f8414-9fcd-4c4f-ae22-d24d32274c54-kube-api-access-wll2b\") pod \"openstack-operator-index-rvt8q\" (UID: \"878f8414-9fcd-4c4f-ae22-d24d32274c54\") " pod="openstack-operators/openstack-operator-index-rvt8q" Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.665396 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wll2b\" (UniqueName: \"kubernetes.io/projected/878f8414-9fcd-4c4f-ae22-d24d32274c54-kube-api-access-wll2b\") pod \"openstack-operator-index-rvt8q\" (UID: 
\"878f8414-9fcd-4c4f-ae22-d24d32274c54\") " pod="openstack-operators/openstack-operator-index-rvt8q" Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.752152 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rvt8q" Mar 07 08:07:36 crc kubenswrapper[4761]: I0307 08:07:36.154606 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rvt8q"] Mar 07 08:07:36 crc kubenswrapper[4761]: I0307 08:07:36.742182 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rvt8q" event={"ID":"878f8414-9fcd-4c4f-ae22-d24d32274c54","Type":"ContainerStarted","Data":"2dcd057d5760e805e83fd90c948d5c1dbe6e992e510d0ea68bbe54e0b6676612"} Mar 07 08:07:37 crc kubenswrapper[4761]: I0307 08:07:37.795665 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rvt8q"] Mar 07 08:07:38 crc kubenswrapper[4761]: I0307 08:07:38.200834 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-j8w2n"] Mar 07 08:07:38 crc kubenswrapper[4761]: I0307 08:07:38.203421 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-j8w2n" Mar 07 08:07:38 crc kubenswrapper[4761]: I0307 08:07:38.222202 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-j8w2n"] Mar 07 08:07:38 crc kubenswrapper[4761]: I0307 08:07:38.293238 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f2xm\" (UniqueName: \"kubernetes.io/projected/69902561-929c-428a-8dab-7a9a91fb3084-kube-api-access-2f2xm\") pod \"openstack-operator-index-j8w2n\" (UID: \"69902561-929c-428a-8dab-7a9a91fb3084\") " pod="openstack-operators/openstack-operator-index-j8w2n" Mar 07 08:07:38 crc kubenswrapper[4761]: I0307 08:07:38.395167 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f2xm\" (UniqueName: \"kubernetes.io/projected/69902561-929c-428a-8dab-7a9a91fb3084-kube-api-access-2f2xm\") pod \"openstack-operator-index-j8w2n\" (UID: \"69902561-929c-428a-8dab-7a9a91fb3084\") " pod="openstack-operators/openstack-operator-index-j8w2n" Mar 07 08:07:38 crc kubenswrapper[4761]: I0307 08:07:38.413640 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f2xm\" (UniqueName: \"kubernetes.io/projected/69902561-929c-428a-8dab-7a9a91fb3084-kube-api-access-2f2xm\") pod \"openstack-operator-index-j8w2n\" (UID: \"69902561-929c-428a-8dab-7a9a91fb3084\") " pod="openstack-operators/openstack-operator-index-j8w2n" Mar 07 08:07:38 crc kubenswrapper[4761]: I0307 08:07:38.550471 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-j8w2n" Mar 07 08:07:39 crc kubenswrapper[4761]: I0307 08:07:39.203894 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-j8w2n"] Mar 07 08:07:39 crc kubenswrapper[4761]: I0307 08:07:39.774662 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rvt8q" event={"ID":"878f8414-9fcd-4c4f-ae22-d24d32274c54","Type":"ContainerStarted","Data":"eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839"} Mar 07 08:07:39 crc kubenswrapper[4761]: I0307 08:07:39.774994 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-rvt8q" podUID="878f8414-9fcd-4c4f-ae22-d24d32274c54" containerName="registry-server" containerID="cri-o://eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839" gracePeriod=2 Mar 07 08:07:39 crc kubenswrapper[4761]: I0307 08:07:39.781236 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j8w2n" event={"ID":"69902561-929c-428a-8dab-7a9a91fb3084","Type":"ContainerStarted","Data":"57eb9cc71daa28f8459959988f4595be709d1473bb27f644973e824823e0d9a3"} Mar 07 08:07:39 crc kubenswrapper[4761]: I0307 08:07:39.781282 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j8w2n" event={"ID":"69902561-929c-428a-8dab-7a9a91fb3084","Type":"ContainerStarted","Data":"b9e43063966f3e8bd51af7d3d90991961f6bd96a5cb13d1a7300457e25c10184"} Mar 07 08:07:39 crc kubenswrapper[4761]: I0307 08:07:39.806355 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rvt8q" podStartSLOduration=2.251602929 podStartE2EDuration="4.806332957s" podCreationTimestamp="2026-03-07 08:07:35 +0000 UTC" firstStartedPulling="2026-03-07 08:07:36.175633046 +0000 UTC 
m=+1113.084799521" lastFinishedPulling="2026-03-07 08:07:38.730363074 +0000 UTC m=+1115.639529549" observedRunningTime="2026-03-07 08:07:39.799971331 +0000 UTC m=+1116.709137816" watchObservedRunningTime="2026-03-07 08:07:39.806332957 +0000 UTC m=+1116.715499452" Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.273268 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rvt8q" Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.297291 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-j8w2n" podStartSLOduration=2.234187021 podStartE2EDuration="2.297261263s" podCreationTimestamp="2026-03-07 08:07:38 +0000 UTC" firstStartedPulling="2026-03-07 08:07:39.21065724 +0000 UTC m=+1116.119823715" lastFinishedPulling="2026-03-07 08:07:39.273731482 +0000 UTC m=+1116.182897957" observedRunningTime="2026-03-07 08:07:39.818263628 +0000 UTC m=+1116.727430103" watchObservedRunningTime="2026-03-07 08:07:40.297261263 +0000 UTC m=+1117.206427758" Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.395212 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wll2b\" (UniqueName: \"kubernetes.io/projected/878f8414-9fcd-4c4f-ae22-d24d32274c54-kube-api-access-wll2b\") pod \"878f8414-9fcd-4c4f-ae22-d24d32274c54\" (UID: \"878f8414-9fcd-4c4f-ae22-d24d32274c54\") " Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.403532 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/878f8414-9fcd-4c4f-ae22-d24d32274c54-kube-api-access-wll2b" (OuterVolumeSpecName: "kube-api-access-wll2b") pod "878f8414-9fcd-4c4f-ae22-d24d32274c54" (UID: "878f8414-9fcd-4c4f-ae22-d24d32274c54"). InnerVolumeSpecName "kube-api-access-wll2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.497030 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wll2b\" (UniqueName: \"kubernetes.io/projected/878f8414-9fcd-4c4f-ae22-d24d32274c54-kube-api-access-wll2b\") on node \"crc\" DevicePath \"\"" Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.795534 4761 generic.go:334] "Generic (PLEG): container finished" podID="878f8414-9fcd-4c4f-ae22-d24d32274c54" containerID="eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839" exitCode=0 Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.795634 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rvt8q" Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.795655 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rvt8q" event={"ID":"878f8414-9fcd-4c4f-ae22-d24d32274c54","Type":"ContainerDied","Data":"eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839"} Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.795778 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rvt8q" event={"ID":"878f8414-9fcd-4c4f-ae22-d24d32274c54","Type":"ContainerDied","Data":"2dcd057d5760e805e83fd90c948d5c1dbe6e992e510d0ea68bbe54e0b6676612"} Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.795845 4761 scope.go:117] "RemoveContainer" containerID="eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839" Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.826328 4761 scope.go:117] "RemoveContainer" containerID="eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839" Mar 07 08:07:40 crc kubenswrapper[4761]: E0307 08:07:40.827097 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839\": container with ID starting with eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839 not found: ID does not exist" containerID="eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839" Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.827139 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839"} err="failed to get container status \"eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839\": rpc error: code = NotFound desc = could not find container \"eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839\": container with ID starting with eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839 not found: ID does not exist" Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.860693 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rvt8q"] Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.867871 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-rvt8q"] Mar 07 08:07:41 crc kubenswrapper[4761]: I0307 08:07:41.721914 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="878f8414-9fcd-4c4f-ae22-d24d32274c54" path="/var/lib/kubelet/pods/878f8414-9fcd-4c4f-ae22-d24d32274c54/volumes" Mar 07 08:07:43 crc kubenswrapper[4761]: I0307 08:07:43.768626 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:07:43 crc kubenswrapper[4761]: I0307 08:07:43.768854 4761 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:07:48 crc kubenswrapper[4761]: I0307 08:07:48.552160 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-j8w2n" Mar 07 08:07:48 crc kubenswrapper[4761]: I0307 08:07:48.552831 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-j8w2n" Mar 07 08:07:48 crc kubenswrapper[4761]: I0307 08:07:48.609883 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-j8w2n" Mar 07 08:07:48 crc kubenswrapper[4761]: I0307 08:07:48.896915 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-j8w2n" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.643387 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv"] Mar 07 08:07:50 crc kubenswrapper[4761]: E0307 08:07:50.643894 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878f8414-9fcd-4c4f-ae22-d24d32274c54" containerName="registry-server" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.643906 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="878f8414-9fcd-4c4f-ae22-d24d32274c54" containerName="registry-server" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.644077 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="878f8414-9fcd-4c4f-ae22-d24d32274c54" containerName="registry-server" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.645237 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.647369 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-ldcmv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.658355 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv"] Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.691233 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ltg5\" (UniqueName: \"kubernetes.io/projected/9c633896-8e1e-4395-afb6-a94b40ef9e66-kube-api-access-8ltg5\") pod \"c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.691314 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-bundle\") pod \"c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.691401 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-util\") pod \"c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 
08:07:50.792816 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ltg5\" (UniqueName: \"kubernetes.io/projected/9c633896-8e1e-4395-afb6-a94b40ef9e66-kube-api-access-8ltg5\") pod \"c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.792881 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-bundle\") pod \"c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.792942 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-util\") pod \"c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.793416 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-util\") pod \"c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.793602 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-bundle\") pod \"c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.813476 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ltg5\" (UniqueName: \"kubernetes.io/projected/9c633896-8e1e-4395-afb6-a94b40ef9e66-kube-api-access-8ltg5\") pod \"c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.960167 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:51 crc kubenswrapper[4761]: I0307 08:07:51.449870 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv"] Mar 07 08:07:51 crc kubenswrapper[4761]: I0307 08:07:51.896923 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" event={"ID":"9c633896-8e1e-4395-afb6-a94b40ef9e66","Type":"ContainerStarted","Data":"c01f55c44f392bcc95f981c296478cca98f863aa245ee79b9e8f777a01bd67d3"} Mar 07 08:07:51 crc kubenswrapper[4761]: I0307 08:07:51.897284 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" event={"ID":"9c633896-8e1e-4395-afb6-a94b40ef9e66","Type":"ContainerStarted","Data":"f9ec268e6712ec08df52565bfadd0fbbeb0a151a7ebeda092744a8a62dfa53d6"} Mar 07 08:07:52 crc kubenswrapper[4761]: I0307 08:07:52.909056 4761 
generic.go:334] "Generic (PLEG): container finished" podID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerID="c01f55c44f392bcc95f981c296478cca98f863aa245ee79b9e8f777a01bd67d3" exitCode=0 Mar 07 08:07:52 crc kubenswrapper[4761]: I0307 08:07:52.909106 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" event={"ID":"9c633896-8e1e-4395-afb6-a94b40ef9e66","Type":"ContainerDied","Data":"c01f55c44f392bcc95f981c296478cca98f863aa245ee79b9e8f777a01bd67d3"} Mar 07 08:07:54 crc kubenswrapper[4761]: I0307 08:07:54.925252 4761 generic.go:334] "Generic (PLEG): container finished" podID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerID="20c6f1c2eb6b3b0bbd521248ce51bd1a94f916f9c63b29c7b3a03e133ac93af1" exitCode=0 Mar 07 08:07:54 crc kubenswrapper[4761]: I0307 08:07:54.925304 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" event={"ID":"9c633896-8e1e-4395-afb6-a94b40ef9e66","Type":"ContainerDied","Data":"20c6f1c2eb6b3b0bbd521248ce51bd1a94f916f9c63b29c7b3a03e133ac93af1"} Mar 07 08:07:55 crc kubenswrapper[4761]: I0307 08:07:55.936711 4761 generic.go:334] "Generic (PLEG): container finished" podID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerID="74deaf143c552cb0fda67e114dabbae85fb23d3deac7b32aca6df3518c0fa4a5" exitCode=0 Mar 07 08:07:55 crc kubenswrapper[4761]: I0307 08:07:55.936756 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" event={"ID":"9c633896-8e1e-4395-afb6-a94b40ef9e66","Type":"ContainerDied","Data":"74deaf143c552cb0fda67e114dabbae85fb23d3deac7b32aca6df3518c0fa4a5"} Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.272670 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.415753 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-bundle\") pod \"9c633896-8e1e-4395-afb6-a94b40ef9e66\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.415819 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ltg5\" (UniqueName: \"kubernetes.io/projected/9c633896-8e1e-4395-afb6-a94b40ef9e66-kube-api-access-8ltg5\") pod \"9c633896-8e1e-4395-afb6-a94b40ef9e66\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.415919 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-util\") pod \"9c633896-8e1e-4395-afb6-a94b40ef9e66\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.419312 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-bundle" (OuterVolumeSpecName: "bundle") pod "9c633896-8e1e-4395-afb6-a94b40ef9e66" (UID: "9c633896-8e1e-4395-afb6-a94b40ef9e66"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.430006 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c633896-8e1e-4395-afb6-a94b40ef9e66-kube-api-access-8ltg5" (OuterVolumeSpecName: "kube-api-access-8ltg5") pod "9c633896-8e1e-4395-afb6-a94b40ef9e66" (UID: "9c633896-8e1e-4395-afb6-a94b40ef9e66"). InnerVolumeSpecName "kube-api-access-8ltg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.447258 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-util" (OuterVolumeSpecName: "util") pod "9c633896-8e1e-4395-afb6-a94b40ef9e66" (UID: "9c633896-8e1e-4395-afb6-a94b40ef9e66"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.517782 4761 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.517813 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ltg5\" (UniqueName: \"kubernetes.io/projected/9c633896-8e1e-4395-afb6-a94b40ef9e66-kube-api-access-8ltg5\") on node \"crc\" DevicePath \"\"" Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.517824 4761 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-util\") on node \"crc\" DevicePath \"\"" Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.958567 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" event={"ID":"9c633896-8e1e-4395-afb6-a94b40ef9e66","Type":"ContainerDied","Data":"f9ec268e6712ec08df52565bfadd0fbbeb0a151a7ebeda092744a8a62dfa53d6"} Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.958637 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9ec268e6712ec08df52565bfadd0fbbeb0a151a7ebeda092744a8a62dfa53d6" Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.958836 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.154292 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547848-qbkn8"] Mar 07 08:08:00 crc kubenswrapper[4761]: E0307 08:08:00.155298 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerName="pull" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.155332 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerName="pull" Mar 07 08:08:00 crc kubenswrapper[4761]: E0307 08:08:00.155406 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerName="util" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.155425 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerName="util" Mar 07 08:08:00 crc kubenswrapper[4761]: E0307 08:08:00.155477 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerName="extract" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.155495 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerName="extract" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.155944 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerName="extract" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.157260 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547848-qbkn8" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.159421 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.159843 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.161017 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.166659 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547848-qbkn8"] Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.268892 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nqww\" (UniqueName: \"kubernetes.io/projected/91d7b5c4-c016-498d-bc33-0b7c52cb7504-kube-api-access-9nqww\") pod \"auto-csr-approver-29547848-qbkn8\" (UID: \"91d7b5c4-c016-498d-bc33-0b7c52cb7504\") " pod="openshift-infra/auto-csr-approver-29547848-qbkn8" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.370201 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nqww\" (UniqueName: \"kubernetes.io/projected/91d7b5c4-c016-498d-bc33-0b7c52cb7504-kube-api-access-9nqww\") pod \"auto-csr-approver-29547848-qbkn8\" (UID: \"91d7b5c4-c016-498d-bc33-0b7c52cb7504\") " pod="openshift-infra/auto-csr-approver-29547848-qbkn8" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.388465 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nqww\" (UniqueName: \"kubernetes.io/projected/91d7b5c4-c016-498d-bc33-0b7c52cb7504-kube-api-access-9nqww\") pod \"auto-csr-approver-29547848-qbkn8\" (UID: \"91d7b5c4-c016-498d-bc33-0b7c52cb7504\") " 
pod="openshift-infra/auto-csr-approver-29547848-qbkn8" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.489239 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547848-qbkn8" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.914342 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547848-qbkn8"] Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.985922 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547848-qbkn8" event={"ID":"91d7b5c4-c016-498d-bc33-0b7c52cb7504","Type":"ContainerStarted","Data":"43410b5cd79fc8df4d6692daf6f34ac67633886b2ec5a8a8b79af48c2b60ae95"} Mar 07 08:08:02 crc kubenswrapper[4761]: I0307 08:08:02.166785 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8"] Mar 07 08:08:02 crc kubenswrapper[4761]: I0307 08:08:02.168231 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" Mar 07 08:08:02 crc kubenswrapper[4761]: I0307 08:08:02.173648 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-vhfp2" Mar 07 08:08:02 crc kubenswrapper[4761]: I0307 08:08:02.224992 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8"] Mar 07 08:08:02 crc kubenswrapper[4761]: I0307 08:08:02.301355 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76h2k\" (UniqueName: \"kubernetes.io/projected/b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6-kube-api-access-76h2k\") pod \"openstack-operator-controller-init-6bfd49cd44-m98b8\" (UID: \"b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6\") " pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" Mar 07 08:08:02 crc kubenswrapper[4761]: I0307 08:08:02.403268 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76h2k\" (UniqueName: \"kubernetes.io/projected/b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6-kube-api-access-76h2k\") pod \"openstack-operator-controller-init-6bfd49cd44-m98b8\" (UID: \"b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6\") " pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" Mar 07 08:08:02 crc kubenswrapper[4761]: I0307 08:08:02.420674 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76h2k\" (UniqueName: \"kubernetes.io/projected/b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6-kube-api-access-76h2k\") pod \"openstack-operator-controller-init-6bfd49cd44-m98b8\" (UID: \"b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6\") " pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" Mar 07 08:08:02 crc kubenswrapper[4761]: I0307 08:08:02.486396 4761 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" Mar 07 08:08:02 crc kubenswrapper[4761]: I0307 08:08:02.936574 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8"] Mar 07 08:08:03 crc kubenswrapper[4761]: I0307 08:08:03.014982 4761 generic.go:334] "Generic (PLEG): container finished" podID="91d7b5c4-c016-498d-bc33-0b7c52cb7504" containerID="90780c6769e50eb25ac4414322be19d0fb66add72262a799352d6b815dedb419" exitCode=0 Mar 07 08:08:03 crc kubenswrapper[4761]: I0307 08:08:03.015079 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547848-qbkn8" event={"ID":"91d7b5c4-c016-498d-bc33-0b7c52cb7504","Type":"ContainerDied","Data":"90780c6769e50eb25ac4414322be19d0fb66add72262a799352d6b815dedb419"} Mar 07 08:08:03 crc kubenswrapper[4761]: I0307 08:08:03.020261 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" event={"ID":"b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6","Type":"ContainerStarted","Data":"ce8c6b24146c7e2a880911b705092c66a9fad60e3f05067b432e6f41427946f3"} Mar 07 08:08:04 crc kubenswrapper[4761]: I0307 08:08:04.388667 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547848-qbkn8" Mar 07 08:08:04 crc kubenswrapper[4761]: I0307 08:08:04.546463 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nqww\" (UniqueName: \"kubernetes.io/projected/91d7b5c4-c016-498d-bc33-0b7c52cb7504-kube-api-access-9nqww\") pod \"91d7b5c4-c016-498d-bc33-0b7c52cb7504\" (UID: \"91d7b5c4-c016-498d-bc33-0b7c52cb7504\") " Mar 07 08:08:04 crc kubenswrapper[4761]: I0307 08:08:04.552625 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d7b5c4-c016-498d-bc33-0b7c52cb7504-kube-api-access-9nqww" (OuterVolumeSpecName: "kube-api-access-9nqww") pod "91d7b5c4-c016-498d-bc33-0b7c52cb7504" (UID: "91d7b5c4-c016-498d-bc33-0b7c52cb7504"). InnerVolumeSpecName "kube-api-access-9nqww". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:08:04 crc kubenswrapper[4761]: I0307 08:08:04.648103 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nqww\" (UniqueName: \"kubernetes.io/projected/91d7b5c4-c016-498d-bc33-0b7c52cb7504-kube-api-access-9nqww\") on node \"crc\" DevicePath \"\"" Mar 07 08:08:05 crc kubenswrapper[4761]: I0307 08:08:05.050257 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547848-qbkn8" event={"ID":"91d7b5c4-c016-498d-bc33-0b7c52cb7504","Type":"ContainerDied","Data":"43410b5cd79fc8df4d6692daf6f34ac67633886b2ec5a8a8b79af48c2b60ae95"} Mar 07 08:08:05 crc kubenswrapper[4761]: I0307 08:08:05.050555 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43410b5cd79fc8df4d6692daf6f34ac67633886b2ec5a8a8b79af48c2b60ae95" Mar 07 08:08:05 crc kubenswrapper[4761]: I0307 08:08:05.050307 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547848-qbkn8" Mar 07 08:08:05 crc kubenswrapper[4761]: I0307 08:08:05.459007 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547842-jnfnp"] Mar 07 08:08:05 crc kubenswrapper[4761]: I0307 08:08:05.466980 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547842-jnfnp"] Mar 07 08:08:05 crc kubenswrapper[4761]: I0307 08:08:05.715867 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a" path="/var/lib/kubelet/pods/2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a/volumes" Mar 07 08:08:08 crc kubenswrapper[4761]: I0307 08:08:08.089262 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" event={"ID":"b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6","Type":"ContainerStarted","Data":"8ab0bef6047f2d6acad570984c54ee9966c807d684743650af6d850b8efe16a7"} Mar 07 08:08:08 crc kubenswrapper[4761]: I0307 08:08:08.090086 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" Mar 07 08:08:08 crc kubenswrapper[4761]: I0307 08:08:08.115518 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" podStartSLOduration=2.221457069 podStartE2EDuration="6.115500783s" podCreationTimestamp="2026-03-07 08:08:02 +0000 UTC" firstStartedPulling="2026-03-07 08:08:02.94183734 +0000 UTC m=+1139.851003815" lastFinishedPulling="2026-03-07 08:08:06.835881054 +0000 UTC m=+1143.745047529" observedRunningTime="2026-03-07 08:08:08.113793862 +0000 UTC m=+1145.022960357" watchObservedRunningTime="2026-03-07 08:08:08.115500783 +0000 UTC m=+1145.024667258" Mar 07 08:08:12 crc kubenswrapper[4761]: I0307 08:08:12.488486 4761 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" Mar 07 08:08:13 crc kubenswrapper[4761]: I0307 08:08:13.768640 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:08:13 crc kubenswrapper[4761]: I0307 08:08:13.769763 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:08:13 crc kubenswrapper[4761]: I0307 08:08:13.769913 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:08:13 crc kubenswrapper[4761]: I0307 08:08:13.770742 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aca69e929765f604d6be340ee9bf2395b19b14b626bf0c5263eb403497f029cf"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:08:13 crc kubenswrapper[4761]: I0307 08:08:13.770984 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://aca69e929765f604d6be340ee9bf2395b19b14b626bf0c5263eb403497f029cf" gracePeriod=600 Mar 07 08:08:14 crc kubenswrapper[4761]: I0307 08:08:14.152037 4761 generic.go:334] "Generic (PLEG): 
container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="aca69e929765f604d6be340ee9bf2395b19b14b626bf0c5263eb403497f029cf" exitCode=0 Mar 07 08:08:14 crc kubenswrapper[4761]: I0307 08:08:14.152105 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"aca69e929765f604d6be340ee9bf2395b19b14b626bf0c5263eb403497f029cf"} Mar 07 08:08:14 crc kubenswrapper[4761]: I0307 08:08:14.152337 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"c720defb28c06a1aa2b8b26acca0b7c32fc87b6223c85d1c22d3f2b9565b9ee4"} Mar 07 08:08:14 crc kubenswrapper[4761]: I0307 08:08:14.152358 4761 scope.go:117] "RemoveContainer" containerID="c1d761b7f5e7692b9893671098d197b8b035ee46f61a8e0511bcc06bc73f8c8f" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.890888 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q"] Mar 07 08:08:31 crc kubenswrapper[4761]: E0307 08:08:31.892085 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d7b5c4-c016-498d-bc33-0b7c52cb7504" containerName="oc" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.892102 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d7b5c4-c016-498d-bc33-0b7c52cb7504" containerName="oc" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.892296 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d7b5c4-c016-498d-bc33-0b7c52cb7504" containerName="oc" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.893007 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.895888 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-hfbcn" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.905138 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn"] Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.906111 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.907863 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qkgbx" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.917374 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22"] Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.918489 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.932813 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-v62xn" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.934746 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q"] Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.952518 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn"] Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.958963 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22"] Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.969551 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh"] Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.970655 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.972340 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-vwg9r" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.995704 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz"] Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.996649 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.000915 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-k4lfk" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.004519 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.010786 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.011701 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.018590 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-2j7kr" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.029026 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.030535 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.034571 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.046756 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm7tj\" (UniqueName: \"kubernetes.io/projected/9554e552-2329-4e93-835e-9dbcad7b7519-kube-api-access-sm7tj\") pod \"cinder-operator-controller-manager-55d77d7b5c-vx8wn\" (UID: \"9554e552-2329-4e93-835e-9dbcad7b7519\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.046814 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnh67\" (UniqueName: \"kubernetes.io/projected/bf4af368-4dee-4a4a-8c43-fd7991ac3366-kube-api-access-lnh67\") pod \"barbican-operator-controller-manager-6db6876945-wvt5q\" (UID: \"bf4af368-4dee-4a4a-8c43-fd7991ac3366\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.046934 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m9r8\" (UniqueName: \"kubernetes.io/projected/a4bc9370-c64d-4e5e-a0bd-70297abb8c0d-kube-api-access-9m9r8\") pod \"glance-operator-controller-manager-64db6967f8-vv8sh\" (UID: \"a4bc9370-c64d-4e5e-a0bd-70297abb8c0d\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.046969 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lvwt\" (UniqueName: 
\"kubernetes.io/projected/90a2f442-aea1-44ac-bbb8-ba58c0969806-kube-api-access-8lvwt\") pod \"designate-operator-controller-manager-5d87c9d997-mxh22\" (UID: \"90a2f442-aea1-44ac-bbb8-ba58c0969806\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.053257 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.053524 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-g6m7j" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.054513 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.068784 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.072604 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.074883 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.085772 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.086504 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-xmsc8" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.087024 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.091080 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ljm65"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.140533 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.149847 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp8v6\" (UniqueName: \"kubernetes.io/projected/0ce5a055-df90-4071-a5cf-f7361e01e5fe-kube-api-access-jp8v6\") pod \"heat-operator-controller-manager-cf99c678f-pnxcz\" (UID: \"0ce5a055-df90-4071-a5cf-f7361e01e5fe\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.149908 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlqt5\" (UniqueName: \"kubernetes.io/projected/3b477f52-57ee-4037-af3a-fa987453bdf2-kube-api-access-mlqt5\") pod \"horizon-operator-controller-manager-78bc7f9bd9-9wqmf\" (UID: \"3b477f52-57ee-4037-af3a-fa987453bdf2\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.149958 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvms4\" (UniqueName: \"kubernetes.io/projected/6bdda9de-4711-4fbc-b9d2-5f867691450a-kube-api-access-vvms4\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.149987 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m9r8\" (UniqueName: \"kubernetes.io/projected/a4bc9370-c64d-4e5e-a0bd-70297abb8c0d-kube-api-access-9m9r8\") pod \"glance-operator-controller-manager-64db6967f8-vv8sh\" (UID: \"a4bc9370-c64d-4e5e-a0bd-70297abb8c0d\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.150017 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lvwt\" (UniqueName: \"kubernetes.io/projected/90a2f442-aea1-44ac-bbb8-ba58c0969806-kube-api-access-8lvwt\") pod \"designate-operator-controller-manager-5d87c9d997-mxh22\" (UID: \"90a2f442-aea1-44ac-bbb8-ba58c0969806\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.150053 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.150092 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qhgd\" (UniqueName: \"kubernetes.io/projected/9dcfc7f8-35e7-4fab-bb7a-c900caf10641-kube-api-access-7qhgd\") pod \"ironic-operator-controller-manager-545456dc4-5gtdw\" (UID: \"9dcfc7f8-35e7-4fab-bb7a-c900caf10641\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.150159 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm7tj\" (UniqueName: \"kubernetes.io/projected/9554e552-2329-4e93-835e-9dbcad7b7519-kube-api-access-sm7tj\") pod \"cinder-operator-controller-manager-55d77d7b5c-vx8wn\" (UID: \"9554e552-2329-4e93-835e-9dbcad7b7519\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.150501 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnh67\" (UniqueName: \"kubernetes.io/projected/bf4af368-4dee-4a4a-8c43-fd7991ac3366-kube-api-access-lnh67\") pod \"barbican-operator-controller-manager-6db6876945-wvt5q\" (UID: \"bf4af368-4dee-4a4a-8c43-fd7991ac3366\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.174767 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.193775 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lvwt\" (UniqueName: \"kubernetes.io/projected/90a2f442-aea1-44ac-bbb8-ba58c0969806-kube-api-access-8lvwt\") pod \"designate-operator-controller-manager-5d87c9d997-mxh22\" (UID: \"90a2f442-aea1-44ac-bbb8-ba58c0969806\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.196448 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnh67\" (UniqueName: \"kubernetes.io/projected/bf4af368-4dee-4a4a-8c43-fd7991ac3366-kube-api-access-lnh67\") pod \"barbican-operator-controller-manager-6db6876945-wvt5q\" (UID: \"bf4af368-4dee-4a4a-8c43-fd7991ac3366\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.196531 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-bh54b"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.200403 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.202170 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-bh54b"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.204399 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm7tj\" (UniqueName: \"kubernetes.io/projected/9554e552-2329-4e93-835e-9dbcad7b7519-kube-api-access-sm7tj\") pod \"cinder-operator-controller-manager-55d77d7b5c-vx8wn\" (UID: \"9554e552-2329-4e93-835e-9dbcad7b7519\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.208398 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-kc2xh"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.209134 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m9r8\" (UniqueName: \"kubernetes.io/projected/a4bc9370-c64d-4e5e-a0bd-70297abb8c0d-kube-api-access-9m9r8\") pod \"glance-operator-controller-manager-64db6967f8-vv8sh\" (UID: \"a4bc9370-c64d-4e5e-a0bd-70297abb8c0d\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.219217 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.232311 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.249348 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.252596 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.252655 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qhgd\" (UniqueName: \"kubernetes.io/projected/9dcfc7f8-35e7-4fab-bb7a-c900caf10641-kube-api-access-7qhgd\") pod \"ironic-operator-controller-manager-545456dc4-5gtdw\" (UID: \"9dcfc7f8-35e7-4fab-bb7a-c900caf10641\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.252732 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5k2m\" (UniqueName: \"kubernetes.io/projected/baefa6a4-53d3-4158-a74f-87c9b766d760-kube-api-access-n5k2m\") pod \"keystone-operator-controller-manager-7c789f89c6-l9ztx\" (UID: \"baefa6a4-53d3-4158-a74f-87c9b766d760\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.252788 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp8v6\" (UniqueName: \"kubernetes.io/projected/0ce5a055-df90-4071-a5cf-f7361e01e5fe-kube-api-access-jp8v6\") pod \"heat-operator-controller-manager-cf99c678f-pnxcz\" (UID: \"0ce5a055-df90-4071-a5cf-f7361e01e5fe\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.252810 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlqt5\" (UniqueName: \"kubernetes.io/projected/3b477f52-57ee-4037-af3a-fa987453bdf2-kube-api-access-mlqt5\") pod \"horizon-operator-controller-manager-78bc7f9bd9-9wqmf\" (UID: \"3b477f52-57ee-4037-af3a-fa987453bdf2\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.252838 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvms4\" (UniqueName: \"kubernetes.io/projected/6bdda9de-4711-4fbc-b9d2-5f867691450a-kube-api-access-vvms4\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch"
Mar 07 08:08:32 crc kubenswrapper[4761]: E0307 08:08:32.253178 4761 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 07 08:08:32 crc kubenswrapper[4761]: E0307 08:08:32.253222 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert podName:6bdda9de-4711-4fbc-b9d2-5f867691450a nodeName:}" failed. No retries permitted until 2026-03-07 08:08:32.753203876 +0000 UTC m=+1169.662370351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert") pod "infra-operator-controller-manager-5995f4446f-zp8ch" (UID: "6bdda9de-4711-4fbc-b9d2-5f867691450a") : secret "infra-operator-webhook-server-cert" not found
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.271223 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.272227 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.304781 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qhgd\" (UniqueName: \"kubernetes.io/projected/9dcfc7f8-35e7-4fab-bb7a-c900caf10641-kube-api-access-7qhgd\") pod \"ironic-operator-controller-manager-545456dc4-5gtdw\" (UID: \"9dcfc7f8-35e7-4fab-bb7a-c900caf10641\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.305708 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-v2xvj"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.306865 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.320782 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.326301 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp8v6\" (UniqueName: \"kubernetes.io/projected/0ce5a055-df90-4071-a5cf-f7361e01e5fe-kube-api-access-jp8v6\") pod \"heat-operator-controller-manager-cf99c678f-pnxcz\" (UID: \"0ce5a055-df90-4071-a5cf-f7361e01e5fe\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.326538 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlqt5\" (UniqueName: \"kubernetes.io/projected/3b477f52-57ee-4037-af3a-fa987453bdf2-kube-api-access-mlqt5\") pod \"horizon-operator-controller-manager-78bc7f9bd9-9wqmf\" (UID: \"3b477f52-57ee-4037-af3a-fa987453bdf2\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.326767 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvms4\" (UniqueName: \"kubernetes.io/projected/6bdda9de-4711-4fbc-b9d2-5f867691450a-kube-api-access-vvms4\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.329050 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.337789 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.338803 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.343898 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-pc44w"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.365220 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzbz4\" (UniqueName: \"kubernetes.io/projected/2db89b29-3889-4242-9ede-98140f3f8319-kube-api-access-wzbz4\") pod \"manila-operator-controller-manager-67d996989d-bh54b\" (UID: \"2db89b29-3889-4242-9ede-98140f3f8319\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.376966 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5k2m\" (UniqueName: \"kubernetes.io/projected/baefa6a4-53d3-4158-a74f-87c9b766d760-kube-api-access-n5k2m\") pod \"keystone-operator-controller-manager-7c789f89c6-l9ztx\" (UID: \"baefa6a4-53d3-4158-a74f-87c9b766d760\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.377044 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkpzc\" (UniqueName: \"kubernetes.io/projected/0febfb54-7188-4247-8d9b-2f166bf597ee-kube-api-access-xkpzc\") pod \"mariadb-operator-controller-manager-7b6bfb6475-c79kh\" (UID: \"0febfb54-7188-4247-8d9b-2f166bf597ee\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.406094 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.430488 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.449257 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5k2m\" (UniqueName: \"kubernetes.io/projected/baefa6a4-53d3-4158-a74f-87c9b766d760-kube-api-access-n5k2m\") pod \"keystone-operator-controller-manager-7c789f89c6-l9ztx\" (UID: \"baefa6a4-53d3-4158-a74f-87c9b766d760\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.479418 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.509314 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frvbx\" (UniqueName: \"kubernetes.io/projected/0bfdda94-7f9c-45d0-897f-0b65cf16e0fd-kube-api-access-frvbx\") pod \"neutron-operator-controller-manager-54688575f-lgkvz\" (UID: \"0bfdda94-7f9c-45d0-897f-0b65cf16e0fd\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.509455 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzbz4\" (UniqueName: \"kubernetes.io/projected/2db89b29-3889-4242-9ede-98140f3f8319-kube-api-access-wzbz4\") pod \"manila-operator-controller-manager-67d996989d-bh54b\" (UID: \"2db89b29-3889-4242-9ede-98140f3f8319\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.509622 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkpzc\" (UniqueName: \"kubernetes.io/projected/0febfb54-7188-4247-8d9b-2f166bf597ee-kube-api-access-xkpzc\") pod \"mariadb-operator-controller-manager-7b6bfb6475-c79kh\" (UID: \"0febfb54-7188-4247-8d9b-2f166bf597ee\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.540889 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-sz556"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.541322 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.542381 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.546651 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-c98xh"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.570349 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkpzc\" (UniqueName: \"kubernetes.io/projected/0febfb54-7188-4247-8d9b-2f166bf597ee-kube-api-access-xkpzc\") pod \"mariadb-operator-controller-manager-7b6bfb6475-c79kh\" (UID: \"0febfb54-7188-4247-8d9b-2f166bf597ee\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.573914 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzbz4\" (UniqueName: \"kubernetes.io/projected/2db89b29-3889-4242-9ede-98140f3f8319-kube-api-access-wzbz4\") pod \"manila-operator-controller-manager-67d996989d-bh54b\" (UID: \"2db89b29-3889-4242-9ede-98140f3f8319\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.575090 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.585779 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.610457 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.614548 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.633927 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.634977 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.636754 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4m7d\" (UniqueName: \"kubernetes.io/projected/9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e-kube-api-access-g4m7d\") pod \"nova-operator-controller-manager-74b6b5dc96-45bp8\" (UID: \"9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.636871 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frvbx\" (UniqueName: \"kubernetes.io/projected/0bfdda94-7f9c-45d0-897f-0b65cf16e0fd-kube-api-access-frvbx\") pod \"neutron-operator-controller-manager-54688575f-lgkvz\" (UID: \"0bfdda94-7f9c-45d0-897f-0b65cf16e0fd\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.637034 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjzb9\" (UniqueName: \"kubernetes.io/projected/353016f5-6859-4193-9845-69bf540c7ab3-kube-api-access-rjzb9\") pod \"octavia-operator-controller-manager-5d86c7ddb7-h9xzz\" (UID: \"353016f5-6859-4193-9845-69bf540c7ab3\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.642591 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-gdl5z"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.683112 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frvbx\" (UniqueName: \"kubernetes.io/projected/0bfdda94-7f9c-45d0-897f-0b65cf16e0fd-kube-api-access-frvbx\") pod \"neutron-operator-controller-manager-54688575f-lgkvz\" (UID: \"0bfdda94-7f9c-45d0-897f-0b65cf16e0fd\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.698641 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.699801 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.703996 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.704977 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-k5qxq"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.713116 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.714275 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.720971 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.731300 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-v5zxj"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.738018 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4m7d\" (UniqueName: \"kubernetes.io/projected/9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e-kube-api-access-g4m7d\") pod \"nova-operator-controller-manager-74b6b5dc96-45bp8\" (UID: \"9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.738123 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-747pf\" (UniqueName: \"kubernetes.io/projected/0a9a2953-a51f-42b6-8ff8-d3f860ff6377-kube-api-access-747pf\") pod \"ovn-operator-controller-manager-75684d597f-cpn97\" (UID: \"0a9a2953-a51f-42b6-8ff8-d3f860ff6377\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.738185 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjzb9\" (UniqueName: \"kubernetes.io/projected/353016f5-6859-4193-9845-69bf540c7ab3-kube-api-access-rjzb9\") pod \"octavia-operator-controller-manager-5d86c7ddb7-h9xzz\" (UID: \"353016f5-6859-4193-9845-69bf540c7ab3\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.744970 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.749646 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.761835 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.771562 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4m7d\" (UniqueName: \"kubernetes.io/projected/9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e-kube-api-access-g4m7d\") pod \"nova-operator-controller-manager-74b6b5dc96-45bp8\" (UID: \"9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.778335 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.783686 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.788136 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-87b8c"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.792289 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjzb9\" (UniqueName: \"kubernetes.io/projected/353016f5-6859-4193-9845-69bf540c7ab3-kube-api-access-rjzb9\") pod \"octavia-operator-controller-manager-5d86c7ddb7-h9xzz\" (UID: \"353016f5-6859-4193-9845-69bf540c7ab3\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.796654 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.800608 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.802969 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-n4r2t"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.812444 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.843755 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.843823 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-747pf\" (UniqueName: \"kubernetes.io/projected/0a9a2953-a51f-42b6-8ff8-d3f860ff6377-kube-api-access-747pf\") pod \"ovn-operator-controller-manager-75684d597f-cpn97\" (UID: \"0a9a2953-a51f-42b6-8ff8-d3f860ff6377\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.843863 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4lls\" (UniqueName: \"kubernetes.io/projected/6540426d-eaf7-4f8f-ab46-8305c545e1cb-kube-api-access-w4lls\") pod \"placement-operator-controller-manager-648564c9fc-xqhz5\" (UID: \"6540426d-eaf7-4f8f-ab46-8305c545e1cb\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.843886 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.844374 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svd9j\" (UniqueName: \"kubernetes.io/projected/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-kube-api-access-svd9j\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w"
Mar 07 08:08:32 crc kubenswrapper[4761]: E0307 08:08:32.844527 4761 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 07 08:08:32 crc kubenswrapper[4761]: E0307 08:08:32.844569 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert podName:6bdda9de-4711-4fbc-b9d2-5f867691450a nodeName:}" failed. No retries permitted until 2026-03-07 08:08:33.844553616 +0000 UTC m=+1170.753720091 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert") pod "infra-operator-controller-manager-5995f4446f-zp8ch" (UID: "6bdda9de-4711-4fbc-b9d2-5f867691450a") : secret "infra-operator-webhook-server-cert" not found
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.854457 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.857488 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.870069 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-747pf\" (UniqueName: \"kubernetes.io/projected/0a9a2953-a51f-42b6-8ff8-d3f860ff6377-kube-api-access-747pf\") pod \"ovn-operator-controller-manager-75684d597f-cpn97\" (UID: \"0a9a2953-a51f-42b6-8ff8-d3f860ff6377\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.875277 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.887046 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.889135 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.890511 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.892763 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-27v65"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.896258 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.908047 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.909596 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.914593 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-khdh9"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.923817 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp"]
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.947915 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.948157 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj8zr\" (UniqueName: \"kubernetes.io/projected/bc92e2bf-a093-4327-a1cd-807a2d916864-kube-api-access-gj8zr\") pod \"swift-operator-controller-manager-9b9ff9f4d-spw5z\" (UID: \"bc92e2bf-a093-4327-a1cd-807a2d916864\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.948313 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svd9j\" (UniqueName: \"kubernetes.io/projected/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-kube-api-access-svd9j\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.948415 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n9hm\" (UniqueName: \"kubernetes.io/projected/6c6a959e-39ee-46ae-9cc5-03fe72cedb7a-kube-api-access-8n9hm\") pod \"telemetry-operator-controller-manager-6ccb65d888-km2fj\" (UID: \"6c6a959e-39ee-46ae-9cc5-03fe72cedb7a\") " pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj"
Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.948468 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4lls\" (UniqueName: \"kubernetes.io/projected/6540426d-eaf7-4f8f-ab46-8305c545e1cb-kube-api-access-w4lls\") pod \"placement-operator-controller-manager-648564c9fc-xqhz5\" (UID: \"6540426d-eaf7-4f8f-ab46-8305c545e1cb\") "
pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.948502 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:32 crc kubenswrapper[4761]: E0307 08:08:32.948777 4761 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:32 crc kubenswrapper[4761]: E0307 08:08:32.948884 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert podName:bd23eeaa-ed7e-45ea-9a40-613ac4e11120 nodeName:}" failed. No retries permitted until 2026-03-07 08:08:33.448857825 +0000 UTC m=+1170.358024310 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" (UID: "bd23eeaa-ed7e-45ea-9a40-613ac4e11120") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.981009 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svd9j\" (UniqueName: \"kubernetes.io/projected/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-kube-api-access-svd9j\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.982314 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.990511 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.991591 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.994817 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4lls\" (UniqueName: \"kubernetes.io/projected/6540426d-eaf7-4f8f-ab46-8305c545e1cb-kube-api-access-w4lls\") pod \"placement-operator-controller-manager-648564c9fc-xqhz5\" (UID: \"6540426d-eaf7-4f8f-ab46-8305c545e1cb\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.997344 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.999407 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.999458 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-k6lrm" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.007377 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.012592 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc"] Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.019871 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm"] Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.021393 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.023356 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-6zbd6" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.035026 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm"] Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.049840 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n9hm\" (UniqueName: \"kubernetes.io/projected/6c6a959e-39ee-46ae-9cc5-03fe72cedb7a-kube-api-access-8n9hm\") pod \"telemetry-operator-controller-manager-6ccb65d888-km2fj\" (UID: \"6c6a959e-39ee-46ae-9cc5-03fe72cedb7a\") " pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.049936 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sk6v\" (UniqueName: \"kubernetes.io/projected/efa0b70d-ed5b-48ba-a601-bfc64689ed5a-kube-api-access-2sk6v\") pod \"watcher-operator-controller-manager-bccc79885-pg2pp\" (UID: \"efa0b70d-ed5b-48ba-a601-bfc64689ed5a\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.049985 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj8zr\" (UniqueName: \"kubernetes.io/projected/bc92e2bf-a093-4327-a1cd-807a2d916864-kube-api-access-gj8zr\") pod \"swift-operator-controller-manager-9b9ff9f4d-spw5z\" (UID: \"bc92e2bf-a093-4327-a1cd-807a2d916864\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.050034 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkjpq\" (UniqueName: \"kubernetes.io/projected/7d43dfb0-643f-4e45-8e27-42b96b2c5ff9-kube-api-access-dkjpq\") pod \"test-operator-controller-manager-55b5ff4dbb-njxxc\" (UID: \"7d43dfb0-643f-4e45-8e27-42b96b2c5ff9\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.082023 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj8zr\" (UniqueName: \"kubernetes.io/projected/bc92e2bf-a093-4327-a1cd-807a2d916864-kube-api-access-gj8zr\") pod \"swift-operator-controller-manager-9b9ff9f4d-spw5z\" (UID: \"bc92e2bf-a093-4327-a1cd-807a2d916864\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.089397 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n9hm\" (UniqueName: \"kubernetes.io/projected/6c6a959e-39ee-46ae-9cc5-03fe72cedb7a-kube-api-access-8n9hm\") pod \"telemetry-operator-controller-manager-6ccb65d888-km2fj\" (UID: \"6c6a959e-39ee-46ae-9cc5-03fe72cedb7a\") " pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.131197 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.165629 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.168211 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.168359 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sk6v\" (UniqueName: \"kubernetes.io/projected/efa0b70d-ed5b-48ba-a601-bfc64689ed5a-kube-api-access-2sk6v\") pod \"watcher-operator-controller-manager-bccc79885-pg2pp\" (UID: \"efa0b70d-ed5b-48ba-a601-bfc64689ed5a\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.170509 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtc5l\" (UniqueName: \"kubernetes.io/projected/ee7ca114-a92b-4ed8-99ec-5d5ab002dca0-kube-api-access-wtc5l\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6pvgm\" (UID: \"ee7ca114-a92b-4ed8-99ec-5d5ab002dca0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.170554 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkjpq\" (UniqueName: \"kubernetes.io/projected/7d43dfb0-643f-4e45-8e27-42b96b2c5ff9-kube-api-access-dkjpq\") pod \"test-operator-controller-manager-55b5ff4dbb-njxxc\" (UID: \"7d43dfb0-643f-4e45-8e27-42b96b2c5ff9\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" Mar 07 08:08:33 crc kubenswrapper[4761]: 
I0307 08:08:33.170612 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9k54\" (UniqueName: \"kubernetes.io/projected/6a6b6075-ec04-418f-ba28-09f11f19b78e-kube-api-access-d9k54\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.170636 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.171133 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.210103 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkjpq\" (UniqueName: \"kubernetes.io/projected/7d43dfb0-643f-4e45-8e27-42b96b2c5ff9-kube-api-access-dkjpq\") pod \"test-operator-controller-manager-55b5ff4dbb-njxxc\" (UID: \"7d43dfb0-643f-4e45-8e27-42b96b2c5ff9\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.218002 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sk6v\" (UniqueName: \"kubernetes.io/projected/efa0b70d-ed5b-48ba-a601-bfc64689ed5a-kube-api-access-2sk6v\") pod \"watcher-operator-controller-manager-bccc79885-pg2pp\" (UID: \"efa0b70d-ed5b-48ba-a601-bfc64689ed5a\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.241486 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q"] Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.274226 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtc5l\" (UniqueName: \"kubernetes.io/projected/ee7ca114-a92b-4ed8-99ec-5d5ab002dca0-kube-api-access-wtc5l\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6pvgm\" (UID: \"ee7ca114-a92b-4ed8-99ec-5d5ab002dca0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.274925 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9k54\" (UniqueName: \"kubernetes.io/projected/6a6b6075-ec04-418f-ba28-09f11f19b78e-kube-api-access-d9k54\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: 
\"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.274973 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.275063 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.275206 4761 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.275263 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:33.775244981 +0000 UTC m=+1170.684411456 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "webhook-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.275599 4761 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.275632 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:33.77562204 +0000 UTC m=+1170.684788515 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "metrics-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.282312 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.297540 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9k54\" (UniqueName: \"kubernetes.io/projected/6a6b6075-ec04-418f-ba28-09f11f19b78e-kube-api-access-d9k54\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.298467 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtc5l\" (UniqueName: \"kubernetes.io/projected/ee7ca114-a92b-4ed8-99ec-5d5ab002dca0-kube-api-access-wtc5l\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6pvgm\" (UID: \"ee7ca114-a92b-4ed8-99ec-5d5ab002dca0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.346283 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.398746 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.433683 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22"] Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.445756 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn"] Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.484886 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.485477 4761 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.485594 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert podName:bd23eeaa-ed7e-45ea-9a40-613ac4e11120 nodeName:}" failed. No retries permitted until 2026-03-07 08:08:34.48557212 +0000 UTC m=+1171.394738595 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" (UID: "bd23eeaa-ed7e-45ea-9a40-613ac4e11120") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.487628 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" event={"ID":"bf4af368-4dee-4a4a-8c43-fd7991ac3366","Type":"ContainerStarted","Data":"db3f9d40c0b9c226601f25b1e425142744974006e32f03b8cce670d0e20c49ac"} Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.490851 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" event={"ID":"90a2f442-aea1-44ac-bbb8-ba58c0969806","Type":"ContainerStarted","Data":"89e41d06802fea2034e16bae1cdd68f0d7ef5c4830e685ed94b8ee3655edc77e"} Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.492778 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" event={"ID":"9554e552-2329-4e93-835e-9dbcad7b7519","Type":"ContainerStarted","Data":"90fc31366eb2baaaaaa486fdd123d5d8db9558e837b93b46f87d20d0a018e77c"} Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.790903 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.790988 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.791123 4761 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.791165 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:34.791150358 +0000 UTC m=+1171.700316833 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "webhook-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.791442 4761 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.791470 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:34.791463145 +0000 UTC m=+1171.700629620 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "metrics-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.844778 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf"] Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.850771 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh"] Mar 07 08:08:33 crc kubenswrapper[4761]: W0307 08:08:33.876287 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dcfc7f8_35e7_4fab_bb7a_c900caf10641.slice/crio-923d95b756f75da230f18c81b89dd6abef19209a9576208b3545146cc1a4b665 WatchSource:0}: Error finding container 923d95b756f75da230f18c81b89dd6abef19209a9576208b3545146cc1a4b665: Status 404 returned error can't find the container with id 923d95b756f75da230f18c81b89dd6abef19209a9576208b3545146cc1a4b665 Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.878238 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw"] Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.888267 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz"] Mar 07 08:08:33 crc kubenswrapper[4761]: W0307 08:08:33.889981 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ce5a055_df90_4071_a5cf_f7361e01e5fe.slice/crio-e119379208369363e2cc7f5e5230f1f76db687a529223e4eaeb1cb7ee4c23c26 WatchSource:0}: Error finding container 
e119379208369363e2cc7f5e5230f1f76db687a529223e4eaeb1cb7ee4c23c26: Status 404 returned error can't find the container with id e119379208369363e2cc7f5e5230f1f76db687a529223e4eaeb1cb7ee4c23c26 Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.892336 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.892467 4761 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.892528 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert podName:6bdda9de-4711-4fbc-b9d2-5f867691450a nodeName:}" failed. No retries permitted until 2026-03-07 08:08:35.892511294 +0000 UTC m=+1172.801677769 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert") pod "infra-operator-controller-manager-5995f4446f-zp8ch" (UID: "6bdda9de-4711-4fbc-b9d2-5f867691450a") : secret "infra-operator-webhook-server-cert" not found Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.469080 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz"] Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.502823 4761 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.503092 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert podName:bd23eeaa-ed7e-45ea-9a40-613ac4e11120 nodeName:}" failed. No retries permitted until 2026-03-07 08:08:36.503076105 +0000 UTC m=+1173.412242580 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" (UID: "bd23eeaa-ed7e-45ea-9a40-613ac4e11120") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.503469 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.574770 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8"] Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.586378 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx"] Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.592774 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" event={"ID":"353016f5-6859-4193-9845-69bf540c7ab3","Type":"ContainerStarted","Data":"0103e21829c8c742357f8368b5efb1ba25f3e1ed12dc031ee287d557e8aefe29"} Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.597230 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" event={"ID":"9dcfc7f8-35e7-4fab-bb7a-c900caf10641","Type":"ContainerStarted","Data":"923d95b756f75da230f18c81b89dd6abef19209a9576208b3545146cc1a4b665"} Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.605466 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" event={"ID":"3b477f52-57ee-4037-af3a-fa987453bdf2","Type":"ContainerStarted","Data":"c4458851d483b675e081ceae52c06cafae9d23794c0c21fb7f39e00ccccb8de2"} Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.610473 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" event={"ID":"0ce5a055-df90-4071-a5cf-f7361e01e5fe","Type":"ContainerStarted","Data":"e119379208369363e2cc7f5e5230f1f76db687a529223e4eaeb1cb7ee4c23c26"} Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.617259 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97"] Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.622118 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" event={"ID":"a4bc9370-c64d-4e5e-a0bd-70297abb8c0d","Type":"ContainerStarted","Data":"6af3be19c53b49c84345e0faaf1299192101c6bce8dfb976aed82ba2bbfd0679"} Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.633162 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh"] Mar 07 08:08:34 crc kubenswrapper[4761]: W0307 08:08:34.640504 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbaefa6a4_53d3_4158_a74f_87c9b766d760.slice/crio-41eed0dd20d9b58bd811641a5f3b65a58d84ed58d4745a204b72b06e22f0505a WatchSource:0}: Error finding container 41eed0dd20d9b58bd811641a5f3b65a58d84ed58d4745a204b72b06e22f0505a: Status 404 returned error can't find the container with id 41eed0dd20d9b58bd811641a5f3b65a58d84ed58d4745a204b72b06e22f0505a Mar 07 08:08:34 crc kubenswrapper[4761]: W0307 08:08:34.644565 4761 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dc4ecc0_cd44_4cb7_a942_2f0249c9e60e.slice/crio-15d70d929bd0bb6af0d8cee51f0ccdfe62877b5100fd870d62012bf1a7143930 WatchSource:0}: Error finding container 15d70d929bd0bb6af0d8cee51f0ccdfe62877b5100fd870d62012bf1a7143930: Status 404 returned error can't find the container with id 15d70d929bd0bb6af0d8cee51f0ccdfe62877b5100fd870d62012bf1a7143930 Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.652438 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-bh54b"] Mar 07 08:08:34 crc kubenswrapper[4761]: W0307 08:08:34.657053 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2db89b29_3889_4242_9ede_98140f3f8319.slice/crio-7ba9e76956593285ebe1b4b3035e90e124c2518ab0ff5f7eadf8e3cb7b097ccf WatchSource:0}: Error finding container 7ba9e76956593285ebe1b4b3035e90e124c2518ab0ff5f7eadf8e3cb7b097ccf: Status 404 returned error can't find the container with id 7ba9e76956593285ebe1b4b3035e90e124c2518ab0ff5f7eadf8e3cb7b097ccf Mar 07 08:08:34 crc kubenswrapper[4761]: W0307 08:08:34.670333 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bfdda94_7f9c_45d0_897f_0b65cf16e0fd.slice/crio-8799c183f657b87abc4311d9fdd84e51fb63b07cb7c39dc2680b9018db4fc4f0 WatchSource:0}: Error finding container 8799c183f657b87abc4311d9fdd84e51fb63b07cb7c39dc2680b9018db4fc4f0: Status 404 returned error can't find the container with id 8799c183f657b87abc4311d9fdd84e51fb63b07cb7c39dc2680b9018db4fc4f0 Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.671525 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz"] Mar 07 08:08:34 crc kubenswrapper[4761]: W0307 08:08:34.739425 4761 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee7ca114_a92b_4ed8_99ec_5d5ab002dca0.slice/crio-94a36a80241043d0b0346ea0906b8fbeeeb9c3d2a71cd0e55a013eca646709a3 WatchSource:0}: Error finding container 94a36a80241043d0b0346ea0906b8fbeeeb9c3d2a71cd0e55a013eca646709a3: Status 404 returned error can't find the container with id 94a36a80241043d0b0346ea0906b8fbeeeb9c3d2a71cd0e55a013eca646709a3 Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.739844 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm"] Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.760969 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp"] Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.774455 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc"] Mar 07 08:08:34 crc kubenswrapper[4761]: W0307 08:08:34.778778 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefa0b70d_ed5b_48ba_a601_bfc64689ed5a.slice/crio-214729b8d152b713b12ed487c90423eeb8fc5d01c99c325e0e5592d3162c585d WatchSource:0}: Error finding container 214729b8d152b713b12ed487c90423eeb8fc5d01c99c325e0e5592d3162c585d: Status 404 returned error can't find the container with id 214729b8d152b713b12ed487c90423eeb8fc5d01c99c325e0e5592d3162c585d Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.781625 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj"] Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.795423 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:38.102.83.180:5001/openstack-k8s-operators/telemetry-operator:1a1a9a719889b8cdda26cbd675f0005643a8f9f2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8n9hm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6ccb65d888-km2fj_openstack-operators(6c6a959e-39ee-46ae-9cc5-03fe72cedb7a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.797523 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" podUID="6c6a959e-39ee-46ae-9cc5-03fe72cedb7a" Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.808451 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.808549 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.808685 4761 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.808701 4761 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.808750 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:36.808734454 +0000 UTC m=+1173.717900929 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "webhook-server-cert" not found Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.808792 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:36.808769915 +0000 UTC m=+1173.717936450 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "metrics-server-cert" not found Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.932389 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5"] Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.967867 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z"] Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.975578 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4lls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-648564c9fc-xqhz5_openstack-operators(6540426d-eaf7-4f8f-ab46-8305c545e1cb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.976860 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" podUID="6540426d-eaf7-4f8f-ab46-8305c545e1cb" Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.636687 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" event={"ID":"9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e","Type":"ContainerStarted","Data":"15d70d929bd0bb6af0d8cee51f0ccdfe62877b5100fd870d62012bf1a7143930"} Mar 07 08:08:35 crc 
kubenswrapper[4761]: I0307 08:08:35.648966 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" event={"ID":"efa0b70d-ed5b-48ba-a601-bfc64689ed5a","Type":"ContainerStarted","Data":"214729b8d152b713b12ed487c90423eeb8fc5d01c99c325e0e5592d3162c585d"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.661176 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" event={"ID":"6c6a959e-39ee-46ae-9cc5-03fe72cedb7a","Type":"ContainerStarted","Data":"71cea6d5d4d0813de663f30c45326a67f9dbf2a213828ad9f2944c6321b6f499"} Mar 07 08:08:35 crc kubenswrapper[4761]: E0307 08:08:35.665409 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.180:5001/openstack-k8s-operators/telemetry-operator:1a1a9a719889b8cdda26cbd675f0005643a8f9f2\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" podUID="6c6a959e-39ee-46ae-9cc5-03fe72cedb7a" Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.667538 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" event={"ID":"6540426d-eaf7-4f8f-ab46-8305c545e1cb","Type":"ContainerStarted","Data":"c7e56a70e29562c65709b0326bbc88ab520ac65787bf2a6670626caf1c77129d"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.669085 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" event={"ID":"7d43dfb0-643f-4e45-8e27-42b96b2c5ff9","Type":"ContainerStarted","Data":"41696e7fa6c101fe74d6c5d717ddf5082b3090f4056e42765ced2ef67e135e1d"} Mar 07 08:08:35 crc kubenswrapper[4761]: E0307 08:08:35.669266 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e\\\"\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" podUID="6540426d-eaf7-4f8f-ab46-8305c545e1cb" Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.670387 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" event={"ID":"ee7ca114-a92b-4ed8-99ec-5d5ab002dca0","Type":"ContainerStarted","Data":"94a36a80241043d0b0346ea0906b8fbeeeb9c3d2a71cd0e55a013eca646709a3"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.681420 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" event={"ID":"2db89b29-3889-4242-9ede-98140f3f8319","Type":"ContainerStarted","Data":"7ba9e76956593285ebe1b4b3035e90e124c2518ab0ff5f7eadf8e3cb7b097ccf"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.691157 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" event={"ID":"0febfb54-7188-4247-8d9b-2f166bf597ee","Type":"ContainerStarted","Data":"9945df17dcbe96409f2bbcac39e0b2f8acacc8192b0a9938e41c6b6143336738"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.723380 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" event={"ID":"bc92e2bf-a093-4327-a1cd-807a2d916864","Type":"ContainerStarted","Data":"063e0c7fbff87124c0702cb999f8be874fc9092a90a0bd318ee22db0c9a817e2"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.723426 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" 
event={"ID":"0a9a2953-a51f-42b6-8ff8-d3f860ff6377","Type":"ContainerStarted","Data":"8803b6fe7a4e79319ee2adb9d836fe0ddefd68faf4ef3fbb9d1297fe91d28583"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.723440 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" event={"ID":"baefa6a4-53d3-4158-a74f-87c9b766d760","Type":"ContainerStarted","Data":"41eed0dd20d9b58bd811641a5f3b65a58d84ed58d4745a204b72b06e22f0505a"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.723455 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" event={"ID":"0bfdda94-7f9c-45d0-897f-0b65cf16e0fd","Type":"ContainerStarted","Data":"8799c183f657b87abc4311d9fdd84e51fb63b07cb7c39dc2680b9018db4fc4f0"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.945228 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:35 crc kubenswrapper[4761]: E0307 08:08:35.945630 4761 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 07 08:08:35 crc kubenswrapper[4761]: E0307 08:08:35.945710 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert podName:6bdda9de-4711-4fbc-b9d2-5f867691450a nodeName:}" failed. No retries permitted until 2026-03-07 08:08:39.945692146 +0000 UTC m=+1176.854858621 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert") pod "infra-operator-controller-manager-5995f4446f-zp8ch" (UID: "6bdda9de-4711-4fbc-b9d2-5f867691450a") : secret "infra-operator-webhook-server-cert" not found Mar 07 08:08:36 crc kubenswrapper[4761]: I0307 08:08:36.559772 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:36 crc kubenswrapper[4761]: E0307 08:08:36.560257 4761 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:36 crc kubenswrapper[4761]: E0307 08:08:36.560304 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert podName:bd23eeaa-ed7e-45ea-9a40-613ac4e11120 nodeName:}" failed. No retries permitted until 2026-03-07 08:08:40.560289325 +0000 UTC m=+1177.469455790 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" (UID: "bd23eeaa-ed7e-45ea-9a40-613ac4e11120") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:36 crc kubenswrapper[4761]: E0307 08:08:36.727748 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.180:5001/openstack-k8s-operators/telemetry-operator:1a1a9a719889b8cdda26cbd675f0005643a8f9f2\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" podUID="6c6a959e-39ee-46ae-9cc5-03fe72cedb7a" Mar 07 08:08:36 crc kubenswrapper[4761]: E0307 08:08:36.727974 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e\\\"\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" podUID="6540426d-eaf7-4f8f-ab46-8305c545e1cb" Mar 07 08:08:36 crc kubenswrapper[4761]: I0307 08:08:36.867880 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:36 crc kubenswrapper[4761]: I0307 08:08:36.868098 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs\") pod 
\"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:36 crc kubenswrapper[4761]: E0307 08:08:36.868592 4761 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 07 08:08:36 crc kubenswrapper[4761]: E0307 08:08:36.868645 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:40.868630939 +0000 UTC m=+1177.777797414 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "webhook-server-cert" not found Mar 07 08:08:36 crc kubenswrapper[4761]: E0307 08:08:36.868597 4761 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 07 08:08:36 crc kubenswrapper[4761]: E0307 08:08:36.868736 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:40.868699371 +0000 UTC m=+1177.777865846 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "metrics-server-cert" not found Mar 07 08:08:39 crc kubenswrapper[4761]: I0307 08:08:39.949592 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:39 crc kubenswrapper[4761]: E0307 08:08:39.949777 4761 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 07 08:08:39 crc kubenswrapper[4761]: E0307 08:08:39.950191 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert podName:6bdda9de-4711-4fbc-b9d2-5f867691450a nodeName:}" failed. No retries permitted until 2026-03-07 08:08:47.95016967 +0000 UTC m=+1184.859336145 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert") pod "infra-operator-controller-manager-5995f4446f-zp8ch" (UID: "6bdda9de-4711-4fbc-b9d2-5f867691450a") : secret "infra-operator-webhook-server-cert" not found Mar 07 08:08:40 crc kubenswrapper[4761]: I0307 08:08:40.560867 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:40 crc kubenswrapper[4761]: E0307 08:08:40.561159 4761 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:40 crc kubenswrapper[4761]: E0307 08:08:40.561304 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert podName:bd23eeaa-ed7e-45ea-9a40-613ac4e11120 nodeName:}" failed. No retries permitted until 2026-03-07 08:08:48.561265663 +0000 UTC m=+1185.470432178 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" (UID: "bd23eeaa-ed7e-45ea-9a40-613ac4e11120") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:40 crc kubenswrapper[4761]: I0307 08:08:40.968163 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:40 crc kubenswrapper[4761]: E0307 08:08:40.968353 4761 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 07 08:08:40 crc kubenswrapper[4761]: E0307 08:08:40.968449 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:48.968423993 +0000 UTC m=+1185.877590498 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "webhook-server-cert" not found Mar 07 08:08:40 crc kubenswrapper[4761]: I0307 08:08:40.968483 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:40 crc kubenswrapper[4761]: E0307 08:08:40.968686 4761 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 07 08:08:40 crc kubenswrapper[4761]: E0307 08:08:40.968766 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:48.968752491 +0000 UTC m=+1185.877918996 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "metrics-server-cert" not found Mar 07 08:08:44 crc kubenswrapper[4761]: I0307 08:08:44.038426 4761 scope.go:117] "RemoveContainer" containerID="09f4a34d389f4eecea1e2e246f771cea1437ac1408958e53146bc65495fe1ec0" Mar 07 08:08:48 crc kubenswrapper[4761]: I0307 08:08:48.023749 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:48 crc kubenswrapper[4761]: I0307 08:08:48.045746 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:48 crc kubenswrapper[4761]: I0307 08:08:48.275071 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:48 crc kubenswrapper[4761]: E0307 08:08:48.458873 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:3f9b0446a124745439306dc3bb7faec8c02c0b6be33f788b9d455fa57fb60120" Mar 07 08:08:48 crc kubenswrapper[4761]: E0307 08:08:48.459125 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:3f9b0446a124745439306dc3bb7faec8c02c0b6be33f788b9d455fa57fb60120,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lnh67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-6db6876945-wvt5q_openstack-operators(bf4af368-4dee-4a4a-8c43-fd7991ac3366): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:08:48 crc kubenswrapper[4761]: E0307 08:08:48.460381 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" podUID="bf4af368-4dee-4a4a-8c43-fd7991ac3366" Mar 07 08:08:48 crc kubenswrapper[4761]: I0307 08:08:48.636292 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:48 crc kubenswrapper[4761]: I0307 08:08:48.660329 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:48 crc kubenswrapper[4761]: I0307 08:08:48.701044 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:48 crc kubenswrapper[4761]: E0307 08:08:48.853184 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:3f9b0446a124745439306dc3bb7faec8c02c0b6be33f788b9d455fa57fb60120\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" podUID="bf4af368-4dee-4a4a-8c43-fd7991ac3366" Mar 07 08:08:49 crc kubenswrapper[4761]: I0307 08:08:49.044324 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:49 crc kubenswrapper[4761]: I0307 08:08:49.044482 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.044503 4761 secret.go:188] Couldn't get 
secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.044584 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:09:05.044562993 +0000 UTC m=+1201.953729468 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "metrics-server-cert" not found Mar 07 08:08:49 crc kubenswrapper[4761]: I0307 08:08:49.049369 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.195385 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968" Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.195576 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dkjpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-55b5ff4dbb-njxxc_openstack-operators(7d43dfb0-643f-4e45-8e27-42b96b2c5ff9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.196822 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podUID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.786320 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c" Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.786766 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-747pf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-75684d597f-cpn97_openstack-operators(0a9a2953-a51f-42b6-8ff8-d3f860ff6377): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.787956 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" podUID="0a9a2953-a51f-42b6-8ff8-d3f860ff6377" Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.865738 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" podUID="0a9a2953-a51f-42b6-8ff8-d3f860ff6377" Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.865810 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968\\\"\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podUID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" Mar 07 08:08:50 crc kubenswrapper[4761]: E0307 08:08:50.289035 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214" Mar 07 08:08:50 crc kubenswrapper[4761]: E0307 08:08:50.289533 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8lvwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-5d87c9d997-mxh22_openstack-operators(90a2f442-aea1-44ac-bbb8-ba58c0969806): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:08:50 crc kubenswrapper[4761]: E0307 08:08:50.290735 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" podUID="90a2f442-aea1-44ac-bbb8-ba58c0969806" Mar 07 08:08:50 crc kubenswrapper[4761]: E0307 08:08:50.872150 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214\\\"\"" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" podUID="90a2f442-aea1-44ac-bbb8-ba58c0969806" Mar 07 08:08:53 crc kubenswrapper[4761]: E0307 08:08:53.937007 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97" Mar 07 08:08:53 crc kubenswrapper[4761]: E0307 08:08:53.938366 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2sk6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-bccc79885-pg2pp_openstack-operators(efa0b70d-ed5b-48ba-a601-bfc64689ed5a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:08:53 crc kubenswrapper[4761]: E0307 08:08:53.939671 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" podUID="efa0b70d-ed5b-48ba-a601-bfc64689ed5a" Mar 07 08:08:54 crc kubenswrapper[4761]: E0307 08:08:54.427807 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/heat-operator@sha256:ee642fcf655f9897d480460008cba2e98b497d3ffdf7ab1d48ea460eb20c2053" Mar 07 08:08:54 crc kubenswrapper[4761]: E0307 08:08:54.428168 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:ee642fcf655f9897d480460008cba2e98b497d3ffdf7ab1d48ea460eb20c2053,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jp8v6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-cf99c678f-pnxcz_openstack-operators(0ce5a055-df90-4071-a5cf-f7361e01e5fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:08:54 crc kubenswrapper[4761]: E0307 08:08:54.429797 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" podUID="0ce5a055-df90-4071-a5cf-f7361e01e5fe" Mar 07 08:08:54 crc kubenswrapper[4761]: E0307 08:08:54.918886 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" podUID="efa0b70d-ed5b-48ba-a601-bfc64689ed5a" Mar 07 08:08:54 crc kubenswrapper[4761]: E0307 08:08:54.918206 4761 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:ee642fcf655f9897d480460008cba2e98b497d3ffdf7ab1d48ea460eb20c2053\\\"\"" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" podUID="0ce5a055-df90-4071-a5cf-f7361e01e5fe" Mar 07 08:08:57 crc kubenswrapper[4761]: E0307 08:08:57.340407 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84" Mar 07 08:08:57 crc kubenswrapper[4761]: E0307 08:08:57.340972 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g4m7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-74b6b5dc96-45bp8_openstack-operators(9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:08:57 crc kubenswrapper[4761]: E0307 08:08:57.342361 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" podUID="9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e" Mar 07 08:08:57 crc kubenswrapper[4761]: E0307 08:08:57.847785 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7" Mar 07 08:08:57 crc kubenswrapper[4761]: E0307 08:08:57.848071 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gj8zr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9b9ff9f4d-spw5z_openstack-operators(bc92e2bf-a093-4327-a1cd-807a2d916864): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:08:57 crc kubenswrapper[4761]: E0307 08:08:57.849286 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" podUID="bc92e2bf-a093-4327-a1cd-807a2d916864" Mar 07 08:08:57 crc kubenswrapper[4761]: E0307 08:08:57.948068 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" podUID="bc92e2bf-a093-4327-a1cd-807a2d916864" Mar 07 08:08:57 crc kubenswrapper[4761]: E0307 08:08:57.948992 4761 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84\\\"\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" podUID="9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e" Mar 07 08:08:58 crc kubenswrapper[4761]: E0307 08:08:58.430477 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4" Mar 07 08:08:58 crc kubenswrapper[4761]: E0307 08:08:58.431419 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-frvbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-54688575f-lgkvz_openstack-operators(0bfdda94-7f9c-45d0-897f-0b65cf16e0fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:08:58 crc kubenswrapper[4761]: E0307 08:08:58.432592 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" podUID="0bfdda94-7f9c-45d0-897f-0b65cf16e0fd" Mar 07 08:08:58 crc kubenswrapper[4761]: E0307 08:08:58.956541 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" podUID="0bfdda94-7f9c-45d0-897f-0b65cf16e0fd" Mar 07 08:09:00 crc kubenswrapper[4761]: E0307 08:09:00.439494 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:114c0dee0bab1d453890e9dcc7727de749055bdbea049384d5696e7ac8d78fe3" Mar 07 08:09:00 crc kubenswrapper[4761]: E0307 08:09:00.440440 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:114c0dee0bab1d453890e9dcc7727de749055bdbea049384d5696e7ac8d78fe3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mlqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-78bc7f9bd9-9wqmf_openstack-operators(3b477f52-57ee-4037-af3a-fa987453bdf2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:09:00 crc kubenswrapper[4761]: E0307 08:09:00.442078 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" podUID="3b477f52-57ee-4037-af3a-fa987453bdf2" Mar 07 08:09:00 crc kubenswrapper[4761]: E0307 08:09:00.898641 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Mar 07 08:09:00 crc kubenswrapper[4761]: E0307 08:09:00.898839 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wtc5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-6pvgm_openstack-operators(ee7ca114-a92b-4ed8-99ec-5d5ab002dca0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:09:00 crc kubenswrapper[4761]: E0307 08:09:00.900845 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" podUID="ee7ca114-a92b-4ed8-99ec-5d5ab002dca0" Mar 07 08:09:00 crc kubenswrapper[4761]: E0307 08:09:00.985831 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:114c0dee0bab1d453890e9dcc7727de749055bdbea049384d5696e7ac8d78fe3\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" podUID="3b477f52-57ee-4037-af3a-fa987453bdf2" Mar 07 08:09:00 crc 
kubenswrapper[4761]: E0307 08:09:00.985970 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" podUID="ee7ca114-a92b-4ed8-99ec-5d5ab002dca0" Mar 07 08:09:01 crc kubenswrapper[4761]: E0307 08:09:01.986206 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c" Mar 07 08:09:01 crc kubenswrapper[4761]: E0307 08:09:01.986412 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n5k2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7c789f89c6-l9ztx_openstack-operators(baefa6a4-53d3-4158-a74f-87c9b766d760): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:09:01 crc kubenswrapper[4761]: E0307 08:09:01.987943 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" podUID="baefa6a4-53d3-4158-a74f-87c9b766d760" Mar 07 08:09:02 crc kubenswrapper[4761]: I0307 08:09:02.996541 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch"] Mar 07 08:09:03 crc kubenswrapper[4761]: I0307 08:09:03.009051 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" event={"ID":"9554e552-2329-4e93-835e-9dbcad7b7519","Type":"ContainerStarted","Data":"b33455865e0f6d3acf34e038cf3f9f49a044be6b3b34193f942ecee0a6bca401"} Mar 07 08:09:03 crc kubenswrapper[4761]: I0307 08:09:03.009891 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" Mar 07 08:09:03 crc kubenswrapper[4761]: I0307 08:09:03.019358 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" event={"ID":"353016f5-6859-4193-9845-69bf540c7ab3","Type":"ContainerStarted","Data":"9529675e209c306d435007b407e16ef496a81a344295e555eb7a95c23cc1f4d7"} Mar 07 08:09:03 crc kubenswrapper[4761]: I0307 08:09:03.019475 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" Mar 07 08:09:03 crc kubenswrapper[4761]: E0307 08:09:03.024534 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" podUID="baefa6a4-53d3-4158-a74f-87c9b766d760" Mar 07 08:09:03 crc kubenswrapper[4761]: I0307 08:09:03.043707 4761 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" podStartSLOduration=7.670401268 podStartE2EDuration="32.043617576s" podCreationTimestamp="2026-03-07 08:08:31 +0000 UTC" firstStartedPulling="2026-03-07 08:08:33.4626441 +0000 UTC m=+1170.371810565" lastFinishedPulling="2026-03-07 08:08:57.835860398 +0000 UTC m=+1194.745026873" observedRunningTime="2026-03-07 08:09:03.033627071 +0000 UTC m=+1199.942793547" watchObservedRunningTime="2026-03-07 08:09:03.043617576 +0000 UTC m=+1199.952784051" Mar 07 08:09:03 crc kubenswrapper[4761]: I0307 08:09:03.087638 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" podStartSLOduration=4.192967819 podStartE2EDuration="31.087614081s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.563303186 +0000 UTC m=+1171.472469661" lastFinishedPulling="2026-03-07 08:09:01.457949448 +0000 UTC m=+1198.367115923" observedRunningTime="2026-03-07 08:09:03.075673039 +0000 UTC m=+1199.984839514" watchObservedRunningTime="2026-03-07 08:09:03.087614081 +0000 UTC m=+1199.996780556" Mar 07 08:09:03 crc kubenswrapper[4761]: I0307 08:09:03.124953 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w"] Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.035447 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" event={"ID":"9dcfc7f8-35e7-4fab-bb7a-c900caf10641","Type":"ContainerStarted","Data":"41d06dd5c60f0f837fe1ccdb037076ad6bdd8e6892d4a4275ed07e5197267ea1"} Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.037058 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.056872 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" event={"ID":"6bdda9de-4711-4fbc-b9d2-5f867691450a","Type":"ContainerStarted","Data":"a3ecb2c5ad509281edbc7d21bd0c3ea1bd14d31cf1439d4a40a740e8fcda9259"} Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.067035 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" event={"ID":"bf4af368-4dee-4a4a-8c43-fd7991ac3366","Type":"ContainerStarted","Data":"308c66f0f8ac1094f2ea1c132459e50a50223ab81c311c156411654064d3d522"} Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.068182 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.074524 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" event={"ID":"6c6a959e-39ee-46ae-9cc5-03fe72cedb7a","Type":"ContainerStarted","Data":"155455edc7fe1a586e85847581a1a7a07befacc0a04e2089528839e6958706cc"} Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.074909 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.076477 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" event={"ID":"6540426d-eaf7-4f8f-ab46-8305c545e1cb","Type":"ContainerStarted","Data":"3fd3dba34c55d338d0187d94c56899dc16d0e60a2ba4b6763712dadfeb6688ec"} Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.076993 4761 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.078966 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" event={"ID":"a4bc9370-c64d-4e5e-a0bd-70297abb8c0d","Type":"ContainerStarted","Data":"b9cd9bfba41b5d6d90b930c8b19de9899b287c8b027ef3788b2d6083a051a8b4"} Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.079617 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.082197 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" event={"ID":"2db89b29-3889-4242-9ede-98140f3f8319","Type":"ContainerStarted","Data":"176b93f48fd4e95dc46217e4461d56b48c71dd75f37f939290d39f8488266098"} Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.082867 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.084595 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" podStartSLOduration=4.50238543 podStartE2EDuration="32.084575522s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:33.879958878 +0000 UTC m=+1170.789125353" lastFinishedPulling="2026-03-07 08:09:01.46214897 +0000 UTC m=+1198.371315445" observedRunningTime="2026-03-07 08:09:04.070895228 +0000 UTC m=+1200.980061703" watchObservedRunningTime="2026-03-07 08:09:04.084575522 +0000 UTC m=+1200.993741997" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.087376 4761 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" event={"ID":"0febfb54-7188-4247-8d9b-2f166bf597ee","Type":"ContainerStarted","Data":"4cdfb31dc409d7b321c810a425510e042a6a930609002a81bab54dc73c830f3c"} Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.087457 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.090421 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" event={"ID":"bd23eeaa-ed7e-45ea-9a40-613ac4e11120","Type":"ContainerStarted","Data":"dbbc8ff55202456bc3284ef9032330c033dd01b456e370a8d40f12be89e6a9aa"} Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.094668 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" podStartSLOduration=3.794806822 podStartE2EDuration="33.094650028s" podCreationTimestamp="2026-03-07 08:08:31 +0000 UTC" firstStartedPulling="2026-03-07 08:08:33.23265267 +0000 UTC m=+1170.141819145" lastFinishedPulling="2026-03-07 08:09:02.532495876 +0000 UTC m=+1199.441662351" observedRunningTime="2026-03-07 08:09:04.087056883 +0000 UTC m=+1200.996223378" watchObservedRunningTime="2026-03-07 08:09:04.094650028 +0000 UTC m=+1201.003816503" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.107348 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" podStartSLOduration=5.4933825 podStartE2EDuration="33.107327988s" podCreationTimestamp="2026-03-07 08:08:31 +0000 UTC" firstStartedPulling="2026-03-07 08:08:33.848785896 +0000 UTC m=+1170.757952371" lastFinishedPulling="2026-03-07 08:09:01.462731384 +0000 UTC m=+1198.371897859" observedRunningTime="2026-03-07 
08:09:04.098499562 +0000 UTC m=+1201.007666037" watchObservedRunningTime="2026-03-07 08:09:04.107327988 +0000 UTC m=+1201.016494463" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.156367 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" podStartSLOduration=4.547653316 podStartE2EDuration="32.156344946s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.975323745 +0000 UTC m=+1171.884490220" lastFinishedPulling="2026-03-07 08:09:02.584015375 +0000 UTC m=+1199.493181850" observedRunningTime="2026-03-07 08:09:04.147891739 +0000 UTC m=+1201.057058224" watchObservedRunningTime="2026-03-07 08:09:04.156344946 +0000 UTC m=+1201.065511431" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.188002 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" podStartSLOduration=5.39737654 podStartE2EDuration="32.187983669s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.670430604 +0000 UTC m=+1171.579597079" lastFinishedPulling="2026-03-07 08:09:01.461037733 +0000 UTC m=+1198.370204208" observedRunningTime="2026-03-07 08:09:04.1810723 +0000 UTC m=+1201.090238765" watchObservedRunningTime="2026-03-07 08:09:04.187983669 +0000 UTC m=+1201.097150144" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.188696 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" podStartSLOduration=4.273471825 podStartE2EDuration="32.188688286s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.795293815 +0000 UTC m=+1171.704460290" lastFinishedPulling="2026-03-07 08:09:02.710510276 +0000 UTC m=+1199.619676751" observedRunningTime="2026-03-07 
08:09:04.162779813 +0000 UTC m=+1201.071946298" watchObservedRunningTime="2026-03-07 08:09:04.188688286 +0000 UTC m=+1201.097854761" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.200847 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" podStartSLOduration=5.377213907 podStartE2EDuration="32.200838433s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.636741751 +0000 UTC m=+1171.545908226" lastFinishedPulling="2026-03-07 08:09:01.460366277 +0000 UTC m=+1198.369532752" observedRunningTime="2026-03-07 08:09:04.197096041 +0000 UTC m=+1201.106262536" watchObservedRunningTime="2026-03-07 08:09:04.200838433 +0000 UTC m=+1201.110004898" Mar 07 08:09:05 crc kubenswrapper[4761]: I0307 08:09:05.088036 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:09:05 crc kubenswrapper[4761]: I0307 08:09:05.093057 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:09:05 crc kubenswrapper[4761]: I0307 08:09:05.186291 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-k6lrm" Mar 07 08:09:05 crc kubenswrapper[4761]: I0307 08:09:05.193606 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:09:05 crc kubenswrapper[4761]: I0307 08:09:05.645457 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc"] Mar 07 08:09:06 crc kubenswrapper[4761]: W0307 08:09:06.295756 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a6b6075_ec04_418f_ba28_09f11f19b78e.slice/crio-ac45a55eafcf2d94b3e10eaf361ac5481de128b9593c3c8cee46cd094d22daac WatchSource:0}: Error finding container ac45a55eafcf2d94b3e10eaf361ac5481de128b9593c3c8cee46cd094d22daac: Status 404 returned error can't find the container with id ac45a55eafcf2d94b3e10eaf361ac5481de128b9593c3c8cee46cd094d22daac Mar 07 08:09:07 crc kubenswrapper[4761]: I0307 08:09:07.120782 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" event={"ID":"6a6b6075-ec04-418f-ba28-09f11f19b78e","Type":"ContainerStarted","Data":"ac45a55eafcf2d94b3e10eaf361ac5481de128b9593c3c8cee46cd094d22daac"} Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.131360 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" event={"ID":"6bdda9de-4711-4fbc-b9d2-5f867691450a","Type":"ContainerStarted","Data":"68e4bf0abd2580e4b2c406474ab426363a8190b78a6672e3d1869d9c259b8a51"} Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.131681 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.133691 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" 
event={"ID":"6a6b6075-ec04-418f-ba28-09f11f19b78e","Type":"ContainerStarted","Data":"e7a31476ed16910c418acc007c2aa3c105b4cbff3c4f9bf9818a1f397f67cf49"}
Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.133877 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc"
Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.142406 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" event={"ID":"0a9a2953-a51f-42b6-8ff8-d3f860ff6377","Type":"ContainerStarted","Data":"fecc6f8d9fdaa78ed2073c88875eb1bbd5196a4b0d106a15e79e884489484194"}
Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.145453 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97"
Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.148469 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" event={"ID":"bd23eeaa-ed7e-45ea-9a40-613ac4e11120","Type":"ContainerStarted","Data":"eecc7c748f3930aebdf862e90525d51e501de54c73b7adcd3a979cf5e63c2b7d"}
Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.149477 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w"
Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.150165 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" event={"ID":"90a2f442-aea1-44ac-bbb8-ba58c0969806","Type":"ContainerStarted","Data":"b587059742bc05264204142dc6e664c7e30176cfb85d713077ea31e1ee3d15eb"}
Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.150431 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22"
Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.151707 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" event={"ID":"7d43dfb0-643f-4e45-8e27-42b96b2c5ff9","Type":"ContainerStarted","Data":"c71116e6a3a9f6c29260500b75d3ed61bd2dbad7aa34d0bdfa582d386bd356a0"}
Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.152152 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc"
Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.174726 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" podStartSLOduration=33.347570449 podStartE2EDuration="37.174691259s" podCreationTimestamp="2026-03-07 08:08:31 +0000 UTC" firstStartedPulling="2026-03-07 08:09:03.024465708 +0000 UTC m=+1199.933632183" lastFinishedPulling="2026-03-07 08:09:06.851586478 +0000 UTC m=+1203.760752993" observedRunningTime="2026-03-07 08:09:08.161627229 +0000 UTC m=+1205.070793724" watchObservedRunningTime="2026-03-07 08:09:08.174691259 +0000 UTC m=+1205.083857744"
Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.182123 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" podStartSLOduration=3.78079205 podStartE2EDuration="37.18210501s" podCreationTimestamp="2026-03-07 08:08:31 +0000 UTC" firstStartedPulling="2026-03-07 08:08:33.449221912 +0000 UTC m=+1170.358388387" lastFinishedPulling="2026-03-07 08:09:06.850534872 +0000 UTC m=+1203.759701347" observedRunningTime="2026-03-07 08:09:08.18170604 +0000 UTC m=+1205.090872535" watchObservedRunningTime="2026-03-07 08:09:08.18210501 +0000 UTC m=+1205.091271485"
Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.214600 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" podStartSLOduration=32.536473525 podStartE2EDuration="36.214587004s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:09:03.16821958 +0000 UTC m=+1200.077386055" lastFinishedPulling="2026-03-07 08:09:06.846333059 +0000 UTC m=+1203.755499534" observedRunningTime="2026-03-07 08:09:08.213276262 +0000 UTC m=+1205.122442747" watchObservedRunningTime="2026-03-07 08:09:08.214587004 +0000 UTC m=+1205.123753479"
Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.243267 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" podStartSLOduration=4.054423503 podStartE2EDuration="36.243246564s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.639363245 +0000 UTC m=+1171.548529720" lastFinishedPulling="2026-03-07 08:09:06.828186276 +0000 UTC m=+1203.737352781" observedRunningTime="2026-03-07 08:09:08.240266011 +0000 UTC m=+1205.149432486" watchObservedRunningTime="2026-03-07 08:09:08.243246564 +0000 UTC m=+1205.152413049"
Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.274741 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podStartSLOduration=36.274701223 podStartE2EDuration="36.274701223s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:09:08.267553868 +0000 UTC m=+1205.176720343" watchObservedRunningTime="2026-03-07 08:09:08.274701223 +0000 UTC m=+1205.183867698"
Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.294412 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podStartSLOduration=4.206410507 podStartE2EDuration="36.294396544s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.748382059 +0000 UTC m=+1171.657548534" lastFinishedPulling="2026-03-07 08:09:06.836368096 +0000 UTC m=+1203.745534571" observedRunningTime="2026-03-07 08:09:08.286301656 +0000 UTC m=+1205.195468131" watchObservedRunningTime="2026-03-07 08:09:08.294396544 +0000 UTC m=+1205.203563019"
Mar 07 08:09:10 crc kubenswrapper[4761]: I0307 08:09:10.168101 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" event={"ID":"efa0b70d-ed5b-48ba-a601-bfc64689ed5a","Type":"ContainerStarted","Data":"2ee856bdbeafaef32c4dd797a7d3f840ebe863d387b3b31e1fe794a09cd10052"}
Mar 07 08:09:10 crc kubenswrapper[4761]: I0307 08:09:10.168657 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp"
Mar 07 08:09:10 crc kubenswrapper[4761]: I0307 08:09:10.202015 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" podStartSLOduration=3.884483532 podStartE2EDuration="38.201992879s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.783321703 +0000 UTC m=+1171.692488178" lastFinishedPulling="2026-03-07 08:09:09.10083102 +0000 UTC m=+1206.009997525" observedRunningTime="2026-03-07 08:09:10.196552106 +0000 UTC m=+1207.105718581" watchObservedRunningTime="2026-03-07 08:09:10.201992879 +0000 UTC m=+1207.111159384"
Mar 07 08:09:11 crc kubenswrapper[4761]: I0307 08:09:11.177415 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" event={"ID":"0ce5a055-df90-4071-a5cf-f7361e01e5fe","Type":"ContainerStarted","Data":"a4a060c5792fba1d01323c32b54ee9777e8c17a5d0136180a80a26968b64713b"}
Mar 07 08:09:11 crc kubenswrapper[4761]: I0307 08:09:11.177691 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz"
Mar 07 08:09:11 crc kubenswrapper[4761]: I0307 08:09:11.180401 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" event={"ID":"bc92e2bf-a093-4327-a1cd-807a2d916864","Type":"ContainerStarted","Data":"9955545878d1e6a050c51140633e58ac2f636cfa6ce5788d24994fb8bada2132"}
Mar 07 08:09:11 crc kubenswrapper[4761]: I0307 08:09:11.181122 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z"
Mar 07 08:09:11 crc kubenswrapper[4761]: I0307 08:09:11.206820 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" podStartSLOduration=3.972640257 podStartE2EDuration="40.206792721s" podCreationTimestamp="2026-03-07 08:08:31 +0000 UTC" firstStartedPulling="2026-03-07 08:08:33.892492044 +0000 UTC m=+1170.801658519" lastFinishedPulling="2026-03-07 08:09:10.126644508 +0000 UTC m=+1207.035810983" observedRunningTime="2026-03-07 08:09:11.195575657 +0000 UTC m=+1208.104742172" watchObservedRunningTime="2026-03-07 08:09:11.206792721 +0000 UTC m=+1208.115959236"
Mar 07 08:09:11 crc kubenswrapper[4761]: I0307 08:09:11.226592 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" podStartSLOduration=3.7720669940000002 podStartE2EDuration="39.226569065s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.974957226 +0000 UTC m=+1171.884123701" lastFinishedPulling="2026-03-07 08:09:10.429459287 +0000 UTC m=+1207.338625772" observedRunningTime="2026-03-07 08:09:11.217428031 +0000 UTC m=+1208.126594506" watchObservedRunningTime="2026-03-07 08:09:11.226569065 +0000 UTC m=+1208.135735550"
Mar 07 08:09:12 crc kubenswrapper[4761]: I0307 08:09:12.223583 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q"
Mar 07 08:09:12 crc kubenswrapper[4761]: I0307 08:09:12.253227 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22"
Mar 07 08:09:12 crc kubenswrapper[4761]: I0307 08:09:12.254105 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn"
Mar 07 08:09:12 crc kubenswrapper[4761]: I0307 08:09:12.325478 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh"
Mar 07 08:09:12 crc kubenswrapper[4761]: I0307 08:09:12.415256 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw"
Mar 07 08:09:12 crc kubenswrapper[4761]: I0307 08:09:12.752760 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b"
Mar 07 08:09:12 crc kubenswrapper[4761]: I0307 08:09:12.857056 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh"
Mar 07 08:09:12 crc kubenswrapper[4761]: I0307 08:09:12.951322 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz"
Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.010465 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97"
Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.134778 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5"
Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.173983 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj"
Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.209699 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" event={"ID":"0bfdda94-7f9c-45d0-897f-0b65cf16e0fd","Type":"ContainerStarted","Data":"7a00a99d23bb211019e154c1e72bda4dd47906c6798accb6dc115db1c493b1ee"}
Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.210099 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz"
Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.213565 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" event={"ID":"9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e","Type":"ContainerStarted","Data":"372765249b68815b4ab28701485402ebab9e58d438ea057d02618ffbc90dceda"}
Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.214097 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8"
Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.237268 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" podStartSLOduration=3.7638540430000003 podStartE2EDuration="41.237250898s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.673616182 +0000 UTC m=+1171.582782657" lastFinishedPulling="2026-03-07 08:09:12.147012997 +0000 UTC m=+1209.056179512" observedRunningTime="2026-03-07 08:09:13.230918384 +0000 UTC m=+1210.140084869" watchObservedRunningTime="2026-03-07 08:09:13.237250898 +0000 UTC m=+1210.146417373"
Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.252442 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" podStartSLOduration=3.683957871 podStartE2EDuration="41.252427689s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.6583799 +0000 UTC m=+1171.567546375" lastFinishedPulling="2026-03-07 08:09:12.226849708 +0000 UTC m=+1209.136016193" observedRunningTime="2026-03-07 08:09:13.246505674 +0000 UTC m=+1210.155672189" watchObservedRunningTime="2026-03-07 08:09:13.252427689 +0000 UTC m=+1210.161594164"
Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.286267 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc"
Mar 07 08:09:15 crc kubenswrapper[4761]: I0307 08:09:15.199837 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc"
Mar 07 08:09:15 crc kubenswrapper[4761]: I0307 08:09:15.234533 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" event={"ID":"3b477f52-57ee-4037-af3a-fa987453bdf2","Type":"ContainerStarted","Data":"43946f80c28ae24ac33173f9f6027101a295ec80bd8fbc2fd07350615b0ac177"}
Mar 07 08:09:15 crc kubenswrapper[4761]: I0307 08:09:15.234785 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf"
Mar 07 08:09:15 crc kubenswrapper[4761]: I0307 08:09:15.265928 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" podStartSLOduration=4.014973422 podStartE2EDuration="44.26590578s" podCreationTimestamp="2026-03-07 08:08:31 +0000 UTC" firstStartedPulling="2026-03-07 08:08:33.847513985 +0000 UTC m=+1170.756680470" lastFinishedPulling="2026-03-07 08:09:14.098446343 +0000 UTC m=+1211.007612828" observedRunningTime="2026-03-07 08:09:15.259488004 +0000 UTC m=+1212.168654479" watchObservedRunningTime="2026-03-07 08:09:15.26590578 +0000 UTC m=+1212.175072255"
Mar 07 08:09:17 crc kubenswrapper[4761]: I0307 08:09:17.256393 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" event={"ID":"ee7ca114-a92b-4ed8-99ec-5d5ab002dca0","Type":"ContainerStarted","Data":"4e2e31d6faab8cd36a1412b234baa563e8944ec0d5304456d467ceaf7dea4b18"}
Mar 07 08:09:17 crc kubenswrapper[4761]: I0307 08:09:17.282501 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" podStartSLOduration=3.895361717 podStartE2EDuration="45.282477678s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.743576362 +0000 UTC m=+1171.652742837" lastFinishedPulling="2026-03-07 08:09:16.130692323 +0000 UTC m=+1213.039858798" observedRunningTime="2026-03-07 08:09:17.276931353 +0000 UTC m=+1214.186097838" watchObservedRunningTime="2026-03-07 08:09:17.282477678 +0000 UTC m=+1214.191644153"
Mar 07 08:09:18 crc kubenswrapper[4761]: I0307 08:09:18.284608 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch"
Mar 07 08:09:18 crc kubenswrapper[4761]: I0307 08:09:18.708507 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w"
Mar 07 08:09:19 crc kubenswrapper[4761]: I0307 08:09:19.279370 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" event={"ID":"baefa6a4-53d3-4158-a74f-87c9b766d760","Type":"ContainerStarted","Data":"e0507f93c9d1e40a2eeda093c3327cad70643971f8bb5cce7f0ed93e3a2fe3b2"}
Mar 07 08:09:19 crc kubenswrapper[4761]: I0307 08:09:19.280040 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx"
Mar 07 08:09:19 crc kubenswrapper[4761]: I0307 08:09:19.310622 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" podStartSLOduration=3.816453948 podStartE2EDuration="47.310594087s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.650775594 +0000 UTC m=+1171.559942069" lastFinishedPulling="2026-03-07 08:09:18.144915733 +0000 UTC m=+1215.054082208" observedRunningTime="2026-03-07 08:09:19.301221258 +0000 UTC m=+1216.210387733" watchObservedRunningTime="2026-03-07 08:09:19.310594087 +0000 UTC m=+1216.219760602"
Mar 07 08:09:22 crc kubenswrapper[4761]: I0307 08:09:22.332507 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf"
Mar 07 08:09:22 crc kubenswrapper[4761]: I0307 08:09:22.652289 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz"
Mar 07 08:09:22 crc kubenswrapper[4761]: I0307 08:09:22.894790 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz"
Mar 07 08:09:22 crc kubenswrapper[4761]: I0307 08:09:22.985171 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8"
Mar 07 08:09:23 crc kubenswrapper[4761]: I0307 08:09:23.171468 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z"
Mar 07 08:09:23 crc kubenswrapper[4761]: I0307 08:09:23.350763 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp"
Mar 07 08:09:32 crc kubenswrapper[4761]: I0307 08:09:32.739394 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.207162 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q29nq"]
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.209431 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.213182 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.213275 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.213384 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.213818 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-xg856"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.232671 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q29nq"]
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.287294 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7cdvj"]
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.289148 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.291035 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.302498 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7cdvj"]
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.336375 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7cdvj\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.336423 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-config\") pod \"dnsmasq-dns-78dd6ddcc-7cdvj\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.336475 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxld6\" (UniqueName: \"kubernetes.io/projected/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-kube-api-access-mxld6\") pod \"dnsmasq-dns-78dd6ddcc-7cdvj\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.336495 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab01a96c-cf26-461f-b358-3ab6603ac44b-config\") pod \"dnsmasq-dns-675f4bcbfc-q29nq\" (UID: \"ab01a96c-cf26-461f-b358-3ab6603ac44b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.336566 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6qk9\" (UniqueName: \"kubernetes.io/projected/ab01a96c-cf26-461f-b358-3ab6603ac44b-kube-api-access-c6qk9\") pod \"dnsmasq-dns-675f4bcbfc-q29nq\" (UID: \"ab01a96c-cf26-461f-b358-3ab6603ac44b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.438033 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6qk9\" (UniqueName: \"kubernetes.io/projected/ab01a96c-cf26-461f-b358-3ab6603ac44b-kube-api-access-c6qk9\") pod \"dnsmasq-dns-675f4bcbfc-q29nq\" (UID: \"ab01a96c-cf26-461f-b358-3ab6603ac44b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.438305 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7cdvj\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.438408 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-config\") pod \"dnsmasq-dns-78dd6ddcc-7cdvj\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.438531 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxld6\" (UniqueName: \"kubernetes.io/projected/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-kube-api-access-mxld6\") pod \"dnsmasq-dns-78dd6ddcc-7cdvj\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.438616 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab01a96c-cf26-461f-b358-3ab6603ac44b-config\") pod \"dnsmasq-dns-675f4bcbfc-q29nq\" (UID: \"ab01a96c-cf26-461f-b358-3ab6603ac44b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.439181 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-config\") pod \"dnsmasq-dns-78dd6ddcc-7cdvj\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.439203 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7cdvj\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.439517 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab01a96c-cf26-461f-b358-3ab6603ac44b-config\") pod \"dnsmasq-dns-675f4bcbfc-q29nq\" (UID: \"ab01a96c-cf26-461f-b358-3ab6603ac44b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.465746 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6qk9\" (UniqueName: \"kubernetes.io/projected/ab01a96c-cf26-461f-b358-3ab6603ac44b-kube-api-access-c6qk9\") pod \"dnsmasq-dns-675f4bcbfc-q29nq\" (UID: \"ab01a96c-cf26-461f-b358-3ab6603ac44b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.466080 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxld6\" (UniqueName: \"kubernetes.io/projected/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-kube-api-access-mxld6\") pod \"dnsmasq-dns-78dd6ddcc-7cdvj\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.534368 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.603070 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj"
Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.995329 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q29nq"]
Mar 07 08:09:51 crc kubenswrapper[4761]: I0307 08:09:51.100480 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7cdvj"]
Mar 07 08:09:51 crc kubenswrapper[4761]: W0307 08:09:51.104359 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8beb56b5_ab82_42d2_ab67_94e2daa1e0cf.slice/crio-2e2be1a8400eca045411b08be1637175746ef4ca4e34d9a0c70f57538db17f95 WatchSource:0}: Error finding container 2e2be1a8400eca045411b08be1637175746ef4ca4e34d9a0c70f57538db17f95: Status 404 returned error can't find the container with id 2e2be1a8400eca045411b08be1637175746ef4ca4e34d9a0c70f57538db17f95
Mar 07 08:09:51 crc kubenswrapper[4761]: I0307 08:09:51.601366 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" event={"ID":"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf","Type":"ContainerStarted","Data":"2e2be1a8400eca045411b08be1637175746ef4ca4e34d9a0c70f57538db17f95"}
Mar 07 08:09:51 crc kubenswrapper[4761]: I0307 08:09:51.602282 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq" event={"ID":"ab01a96c-cf26-461f-b358-3ab6603ac44b","Type":"ContainerStarted","Data":"855102b150698b57ba7c9473297e3dfec8e7b0151e1a091b31c2e7792371c9c6"}
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.250078 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q29nq"]
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.300202 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lmkd6"]
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.309403 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.322378 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lmkd6"]
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.502647 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2dlj\" (UniqueName: \"kubernetes.io/projected/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-kube-api-access-c2dlj\") pod \"dnsmasq-dns-5ccc8479f9-lmkd6\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.503142 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lmkd6\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.503179 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-config\") pod \"dnsmasq-dns-5ccc8479f9-lmkd6\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.563537 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7cdvj"]
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.584656 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cqd72"]
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.599039 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cqd72"]
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.599158 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-cqd72"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.604910 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lmkd6\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.604969 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-config\") pod \"dnsmasq-dns-5ccc8479f9-lmkd6\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.605031 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2dlj\" (UniqueName: \"kubernetes.io/projected/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-kube-api-access-c2dlj\") pod \"dnsmasq-dns-5ccc8479f9-lmkd6\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.606003 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lmkd6\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.606449 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-config\") pod \"dnsmasq-dns-5ccc8479f9-lmkd6\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.632558 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2dlj\" (UniqueName: \"kubernetes.io/projected/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-kube-api-access-c2dlj\") pod \"dnsmasq-dns-5ccc8479f9-lmkd6\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.640004 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.708772 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-cqd72\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " pod="openstack/dnsmasq-dns-57d769cc4f-cqd72"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.709207 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxjbg\" (UniqueName: \"kubernetes.io/projected/43253af6-83ba-4b96-8907-7294c07c4185-kube-api-access-wxjbg\") pod \"dnsmasq-dns-57d769cc4f-cqd72\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " pod="openstack/dnsmasq-dns-57d769cc4f-cqd72"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.709292 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-config\") pod \"dnsmasq-dns-57d769cc4f-cqd72\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " pod="openstack/dnsmasq-dns-57d769cc4f-cqd72"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.810529 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-cqd72\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " pod="openstack/dnsmasq-dns-57d769cc4f-cqd72"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.810586 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxjbg\" (UniqueName: \"kubernetes.io/projected/43253af6-83ba-4b96-8907-7294c07c4185-kube-api-access-wxjbg\") pod \"dnsmasq-dns-57d769cc4f-cqd72\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " pod="openstack/dnsmasq-dns-57d769cc4f-cqd72"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.810678 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-config\") pod \"dnsmasq-dns-57d769cc4f-cqd72\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " pod="openstack/dnsmasq-dns-57d769cc4f-cqd72"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.811959 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-cqd72\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " pod="openstack/dnsmasq-dns-57d769cc4f-cqd72"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.812037 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-config\") pod \"dnsmasq-dns-57d769cc4f-cqd72\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " pod="openstack/dnsmasq-dns-57d769cc4f-cqd72"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.832789 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxjbg\" (UniqueName: \"kubernetes.io/projected/43253af6-83ba-4b96-8907-7294c07c4185-kube-api-access-wxjbg\") pod \"dnsmasq-dns-57d769cc4f-cqd72\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " pod="openstack/dnsmasq-dns-57d769cc4f-cqd72"
Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.918981 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-cqd72"
Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.244702 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lmkd6"]
Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.410940 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cqd72"]
Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.447694 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.449444 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.451982 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.452299 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.451980 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.452532 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xhskz"
Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.452586 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.452763 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.456701 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Mar 07 08:09:55 crc kubenswrapper[4761]: I0307
08:09:54.457340 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.625803 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc2f3dec-2838-4d30-93c2-631da252cdb7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.625994 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.626033 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.626151 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.626196 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.626257 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2f3dec-2838-4d30-93c2-631da252cdb7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.626322 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.626351 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.629541 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjr25\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-kube-api-access-gjr25\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.629584 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.629638 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.715291 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" event={"ID":"43253af6-83ba-4b96-8907-7294c07c4185","Type":"ContainerStarted","Data":"ce21de049c18f84f8b407a385c61cdc9165ad54a30750d97e3860eec6a5d7040"} Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.718753 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" event={"ID":"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f","Type":"ContainerStarted","Data":"b5fbbc13dbf55e476e8f5fa6e8f3f629fc09303ea70bc234446b8082ea16b4f0"} Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737196 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737234 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" 
Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737259 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737293 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737328 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2f3dec-2838-4d30-93c2-631da252cdb7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737353 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737375 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737408 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gjr25\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-kube-api-access-gjr25\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737427 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737451 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737529 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc2f3dec-2838-4d30-93c2-631da252cdb7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.740435 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.742498 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.747045 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.747080 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2f3dec-2838-4d30-93c2-631da252cdb7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.751043 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.751669 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.753448 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.753926 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.755349 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.755391 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/860627d4bd50531ff33cb398731d7440ae9b5625a3c0a76764756dbab322d2ce/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.764108 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc2f3dec-2838-4d30-93c2-631da252cdb7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.767416 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.770873 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.771515 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjr25\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-kube-api-access-gjr25\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.772983 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.773156 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.773277 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ndxcd" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.774977 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.780457 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.780661 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.780939 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.794437 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.806452 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.808290 4761 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.816405 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.817906 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.825162 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.879738 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.893796 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947594 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947658 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-config-data\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: 
I0307 08:09:54.947697 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947743 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947789 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947815 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n62hs\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-kube-api-access-n62hs\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947850 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 
08:09:54.947871 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krzrn\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-kube-api-access-krzrn\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947904 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947940 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/663244dc-847b-4dda-9c2c-4cae23e48e64-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947986 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948006 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-config-data\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948069 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-server-conf\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948095 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948116 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49dec540-e872-432f-bffe-1b0380ac0082-pod-info\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948145 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948184 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948236 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948259 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948284 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-server-conf\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948304 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948327 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76f82\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-kube-api-access-76f82\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948349 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-server-conf\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948375 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948409 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-pod-info\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948432 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948462 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948498 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948522 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948544 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-config-data\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948569 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948588 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/663244dc-847b-4dda-9c2c-4cae23e48e64-pod-info\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948616 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49dec540-e872-432f-bffe-1b0380ac0082-erlang-cookie-secret\") 
pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.051168 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.051241 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.051836 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-server-conf\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.051859 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.052358 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 
08:09:55.051874 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76f82\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-kube-api-access-76f82\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.052553 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-server-conf\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.052652 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.052749 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-pod-info\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.052813 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.052847 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.052937 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.052961 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.053014 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-config-data\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.053452 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.053839 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " 
pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.053945 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/663244dc-847b-4dda-9c2c-4cae23e48e64-pod-info\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.053962 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.054378 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49dec540-e872-432f-bffe-1b0380ac0082-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.054836 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.054880 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.054963 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-config-data\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055018 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055053 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055354 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055380 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n62hs\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-kube-api-access-n62hs\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055408 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055425 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krzrn\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-kube-api-access-krzrn\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055455 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055479 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/663244dc-847b-4dda-9c2c-4cae23e48e64-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055620 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055650 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-config-data\") pod \"rabbitmq-server-1\" 
(UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.056401 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-server-conf\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.056439 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.056580 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49dec540-e872-432f-bffe-1b0380ac0082-pod-info\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.056608 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.057643 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 
08:09:55.058540 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49dec540-e872-432f-bffe-1b0380ac0082-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.058897 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-server-conf\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.065983 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.068281 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49dec540-e872-432f-bffe-1b0380ac0082-pod-info\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.068612 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.069627 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-server-conf\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.069742 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.069986 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-config-data\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.071098 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.077405 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.077455 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0e63d5dfd4825d4df4a1fd6592e0e906350781786a587f415bb4549b05f1b05e/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.078259 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.078319 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/df547fdc21673de1cc702cfc619e77e1e5934613434f5da0c9db8a26fc9b248e/globalmount\"" pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.081524 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.081564 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b440d898d7256a75603c2b0b9c323ce660ab24929494b6992860ef443ff68edd/globalmount\"" pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.082589 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.089812 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.135389 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/663244dc-847b-4dda-9c2c-4cae23e48e64-pod-info\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.135921 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-config-data\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.136164 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-server-conf\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.136241 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.136943 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/663244dc-847b-4dda-9c2c-4cae23e48e64-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.137360 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-pod-info\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.137431 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.137678 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 
08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.137746 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.138311 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.138319 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-config-data\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.139466 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.141496 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n62hs\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-kube-api-access-n62hs\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.141881 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76f82\" (UniqueName: 
\"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-kube-api-access-76f82\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.142499 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krzrn\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-kube-api-access-krzrn\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.144780 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.148623 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.173111 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.180365 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.199031 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.222903 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.232509 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.720422 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.722813 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.723136 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.731227 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.731469 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.731879 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vjk5b" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.732444 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.737026 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.771993 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85cbz\" (UniqueName: \"kubernetes.io/projected/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-kube-api-access-85cbz\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.772384 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c90aacb7-8e98-426f-8a49-7b4e9ed99dd2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c90aacb7-8e98-426f-8a49-7b4e9ed99dd2\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.772433 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.772467 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-config-data-default\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.772514 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-kolla-config\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.772537 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.772553 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.772656 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.874953 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85cbz\" (UniqueName: \"kubernetes.io/projected/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-kube-api-access-85cbz\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.875660 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c90aacb7-8e98-426f-8a49-7b4e9ed99dd2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c90aacb7-8e98-426f-8a49-7b4e9ed99dd2\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.875732 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.875785 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-config-data-default\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.875825 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-kolla-config\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.875898 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.875932 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.876079 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.882677 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-kolla-config\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.882921 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-config-data-default\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") 
" pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.882928 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.883192 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.890469 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.891637 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.891727 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c90aacb7-8e98-426f-8a49-7b4e9ed99dd2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c90aacb7-8e98-426f-8a49-7b4e9ed99dd2\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/86a73356adf3cc0941e2ebb82fdb511ee83184512e167295b86f1f349220168c/globalmount\"" pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.893651 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85cbz\" (UniqueName: \"kubernetes.io/projected/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-kube-api-access-85cbz\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.914920 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.932513 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c90aacb7-8e98-426f-8a49-7b4e9ed99dd2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c90aacb7-8e98-426f-8a49-7b4e9ed99dd2\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.054216 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.315761 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.333422 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.361319 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.381778 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:09:56 crc kubenswrapper[4761]: W0307 08:09:56.439029 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc2f3dec_2838_4d30_93c2_631da252cdb7.slice/crio-f4fb3120122a5372512b2b348c9d0b61b0cb91030e2f3a5d057787a248ed6391 WatchSource:0}: Error finding container f4fb3120122a5372512b2b348c9d0b61b0cb91030e2f3a5d057787a248ed6391: Status 404 returned error can't find the container with id f4fb3120122a5372512b2b348c9d0b61b0cb91030e2f3a5d057787a248ed6391 Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.764330 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"663244dc-847b-4dda-9c2c-4cae23e48e64","Type":"ContainerStarted","Data":"1abab7db156cafa869043228964f8c2a04ac722a8f9439b7f2f97babcd69aa26"} Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.767125 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49dec540-e872-432f-bffe-1b0380ac0082","Type":"ContainerStarted","Data":"256a7517664626ead142d4d5dec2607a661a8459a086b7a664b53dd69f9b3663"} Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.769772 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"bc2f3dec-2838-4d30-93c2-631da252cdb7","Type":"ContainerStarted","Data":"f4fb3120122a5372512b2b348c9d0b61b0cb91030e2f3a5d057787a248ed6391"} Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.772601 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7201e0b2-1f44-45f0-b746-b98f8cb01f8f","Type":"ContainerStarted","Data":"8ff2eb14f63926a2787b9edf0a4314c17464aa3f349344a0ae0be7df60f72ec1"} Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.852393 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 07 08:09:56 crc kubenswrapper[4761]: W0307 08:09:56.873297 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbb3bcbc_7017_4ec9_875d_d8dfc0baafbe.slice/crio-415c118386826c0ef50b1bccb030c595c3ff817c4af667339dbb20c50d04a503 WatchSource:0}: Error finding container 415c118386826c0ef50b1bccb030c595c3ff817c4af667339dbb20c50d04a503: Status 404 returned error can't find the container with id 415c118386826c0ef50b1bccb030c595c3ff817c4af667339dbb20c50d04a503 Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.103780 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.105326 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.109664 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-6gnr4" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.111179 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.111950 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.119849 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.164799 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.220774 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9f0ccb6a-6367-409b-b996-4946fa2c8981-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.220866 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f0ccb6a-6367-409b-b996-4946fa2c8981-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.220896 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/9f0ccb6a-6367-409b-b996-4946fa2c8981-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.220936 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0ccb6a-6367-409b-b996-4946fa2c8981-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.221077 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9f0ccb6a-6367-409b-b996-4946fa2c8981-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.221162 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7afb2260-653e-434b-bdeb-21b58ca4c48d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7afb2260-653e-434b-bdeb-21b58ca4c48d\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.221286 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f0ccb6a-6367-409b-b996-4946fa2c8981-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.221382 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sdn2\" (UniqueName: \"kubernetes.io/projected/9f0ccb6a-6367-409b-b996-4946fa2c8981-kube-api-access-2sdn2\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.322769 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f0ccb6a-6367-409b-b996-4946fa2c8981-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.322830 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9f0ccb6a-6367-409b-b996-4946fa2c8981-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.322865 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0ccb6a-6367-409b-b996-4946fa2c8981-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.322908 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9f0ccb6a-6367-409b-b996-4946fa2c8981-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.322955 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-7afb2260-653e-434b-bdeb-21b58ca4c48d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7afb2260-653e-434b-bdeb-21b58ca4c48d\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.323025 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f0ccb6a-6367-409b-b996-4946fa2c8981-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.323074 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sdn2\" (UniqueName: \"kubernetes.io/projected/9f0ccb6a-6367-409b-b996-4946fa2c8981-kube-api-access-2sdn2\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.323121 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9f0ccb6a-6367-409b-b996-4946fa2c8981-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.324059 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9f0ccb6a-6367-409b-b996-4946fa2c8981-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.324252 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9f0ccb6a-6367-409b-b996-4946fa2c8981-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.327483 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9f0ccb6a-6367-409b-b996-4946fa2c8981-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.327926 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9f0ccb6a-6367-409b-b996-4946fa2c8981-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.330103 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f0ccb6a-6367-409b-b996-4946fa2c8981-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.333249 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0ccb6a-6367-409b-b996-4946fa2c8981-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.338231 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.338286 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7afb2260-653e-434b-bdeb-21b58ca4c48d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7afb2260-653e-434b-bdeb-21b58ca4c48d\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3ec9b84e0a56e4325ed7f36030528850ceb5c9f38086f74cfa38f4d2e88be1e3/globalmount\"" pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.344386 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sdn2\" (UniqueName: \"kubernetes.io/projected/9f0ccb6a-6367-409b-b996-4946fa2c8981-kube-api-access-2sdn2\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.437325 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.448429 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.448537 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.452592 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kdkld" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.452767 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.452877 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.453487 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7afb2260-653e-434b-bdeb-21b58ca4c48d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7afb2260-653e-434b-bdeb-21b58ca4c48d\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.526995 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4e95617-c055-4b9f-ac38-32a41c2e8846-kolla-config\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.527045 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx5hj\" (UniqueName: \"kubernetes.io/projected/d4e95617-c055-4b9f-ac38-32a41c2e8846-kube-api-access-qx5hj\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.527073 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d4e95617-c055-4b9f-ac38-32a41c2e8846-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.527108 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4e95617-c055-4b9f-ac38-32a41c2e8846-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.527184 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4e95617-c055-4b9f-ac38-32a41c2e8846-config-data\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.630112 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4e95617-c055-4b9f-ac38-32a41c2e8846-kolla-config\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.630182 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx5hj\" (UniqueName: \"kubernetes.io/projected/d4e95617-c055-4b9f-ac38-32a41c2e8846-kube-api-access-qx5hj\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.630225 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e95617-c055-4b9f-ac38-32a41c2e8846-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " 
pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.630277 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4e95617-c055-4b9f-ac38-32a41c2e8846-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.630346 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4e95617-c055-4b9f-ac38-32a41c2e8846-config-data\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.631659 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4e95617-c055-4b9f-ac38-32a41c2e8846-config-data\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.633815 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4e95617-c055-4b9f-ac38-32a41c2e8846-kolla-config\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.640075 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e95617-c055-4b9f-ac38-32a41c2e8846-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.654063 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx5hj\" (UniqueName: 
\"kubernetes.io/projected/d4e95617-c055-4b9f-ac38-32a41c2e8846-kube-api-access-qx5hj\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.668534 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4e95617-c055-4b9f-ac38-32a41c2e8846-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.733752 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.785316 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe","Type":"ContainerStarted","Data":"415c118386826c0ef50b1bccb030c595c3ff817c4af667339dbb20c50d04a503"} Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.816222 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 07 08:09:58 crc kubenswrapper[4761]: I0307 08:09:58.444480 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 07 08:09:58 crc kubenswrapper[4761]: I0307 08:09:58.594302 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.096291 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.097771 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.112635 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-27ft9" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.117919 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.229427 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547850-g6d9p"] Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.232221 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547850-g6d9p" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.239448 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.239610 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.239738 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.240842 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhtf2\" (UniqueName: \"kubernetes.io/projected/813224b8-8c59-4153-b642-5ee9da95777d-kube-api-access-bhtf2\") pod \"kube-state-metrics-0\" (UID: \"813224b8-8c59-4153-b642-5ee9da95777d\") " pod="openstack/kube-state-metrics-0" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.243960 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547850-g6d9p"] Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.344386 4761 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bhtf2\" (UniqueName: \"kubernetes.io/projected/813224b8-8c59-4153-b642-5ee9da95777d-kube-api-access-bhtf2\") pod \"kube-state-metrics-0\" (UID: \"813224b8-8c59-4153-b642-5ee9da95777d\") " pod="openstack/kube-state-metrics-0" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.344573 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdqtk\" (UniqueName: \"kubernetes.io/projected/3f1c6039-d723-41f6-a7a2-42f53281a5fa-kube-api-access-cdqtk\") pod \"auto-csr-approver-29547850-g6d9p\" (UID: \"3f1c6039-d723-41f6-a7a2-42f53281a5fa\") " pod="openshift-infra/auto-csr-approver-29547850-g6d9p" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.373876 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhtf2\" (UniqueName: \"kubernetes.io/projected/813224b8-8c59-4153-b642-5ee9da95777d-kube-api-access-bhtf2\") pod \"kube-state-metrics-0\" (UID: \"813224b8-8c59-4153-b642-5ee9da95777d\") " pod="openstack/kube-state-metrics-0" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.450886 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdqtk\" (UniqueName: \"kubernetes.io/projected/3f1c6039-d723-41f6-a7a2-42f53281a5fa-kube-api-access-cdqtk\") pod \"auto-csr-approver-29547850-g6d9p\" (UID: \"3f1c6039-d723-41f6-a7a2-42f53281a5fa\") " pod="openshift-infra/auto-csr-approver-29547850-g6d9p" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.452287 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.489351 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdqtk\" (UniqueName: \"kubernetes.io/projected/3f1c6039-d723-41f6-a7a2-42f53281a5fa-kube-api-access-cdqtk\") pod \"auto-csr-approver-29547850-g6d9p\" (UID: \"3f1c6039-d723-41f6-a7a2-42f53281a5fa\") " pod="openshift-infra/auto-csr-approver-29547850-g6d9p" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.554734 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547850-g6d9p" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.874981 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz"] Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.876506 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.878878 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-4692c" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.880064 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.882455 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz"] Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.968093 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glbs4\" (UniqueName: \"kubernetes.io/projected/6a8f8341-0209-4fdd-8fdd-4373ec14e18c-kube-api-access-glbs4\") pod \"observability-ui-dashboards-66cbf594b5-bs4zz\" (UID: 
\"6a8f8341-0209-4fdd-8fdd-4373ec14e18c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.968537 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a8f8341-0209-4fdd-8fdd-4373ec14e18c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-bs4zz\" (UID: \"6a8f8341-0209-4fdd-8fdd-4373ec14e18c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.070420 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glbs4\" (UniqueName: \"kubernetes.io/projected/6a8f8341-0209-4fdd-8fdd-4373ec14e18c-kube-api-access-glbs4\") pod \"observability-ui-dashboards-66cbf594b5-bs4zz\" (UID: \"6a8f8341-0209-4fdd-8fdd-4373ec14e18c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.070624 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a8f8341-0209-4fdd-8fdd-4373ec14e18c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-bs4zz\" (UID: \"6a8f8341-0209-4fdd-8fdd-4373ec14e18c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" Mar 07 08:10:01 crc kubenswrapper[4761]: E0307 08:10:01.070761 4761 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Mar 07 08:10:01 crc kubenswrapper[4761]: E0307 08:10:01.070817 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a8f8341-0209-4fdd-8fdd-4373ec14e18c-serving-cert podName:6a8f8341-0209-4fdd-8fdd-4373ec14e18c nodeName:}" failed. 
No retries permitted until 2026-03-07 08:10:01.57079956 +0000 UTC m=+1258.479966035 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6a8f8341-0209-4fdd-8fdd-4373ec14e18c-serving-cert") pod "observability-ui-dashboards-66cbf594b5-bs4zz" (UID: "6a8f8341-0209-4fdd-8fdd-4373ec14e18c") : secret "observability-ui-dashboards" not found Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.117685 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glbs4\" (UniqueName: \"kubernetes.io/projected/6a8f8341-0209-4fdd-8fdd-4373ec14e18c-kube-api-access-glbs4\") pod \"observability-ui-dashboards-66cbf594b5-bs4zz\" (UID: \"6a8f8341-0209-4fdd-8fdd-4373ec14e18c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.220891 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-56dd85c946-zcd4c"] Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.222371 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.240329 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56dd85c946-zcd4c"] Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.274684 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-console-serving-cert\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.274754 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2mqj\" (UniqueName: \"kubernetes.io/projected/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-kube-api-access-d2mqj\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.274820 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-oauth-serving-cert\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.274844 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-console-oauth-config\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.274862 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-trusted-ca-bundle\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.274876 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-console-config\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.274922 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-service-ca\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.308804 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.311232 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.314934 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.331062 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.332494 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.333120 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.333273 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.333359 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-bct6h" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.333379 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.333465 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.343469 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376182 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376228 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-config\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376257 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-service-ca\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376438 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-console-serving-cert\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376473 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376521 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-d2mqj\" (UniqueName: \"kubernetes.io/projected/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-kube-api-access-d2mqj\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376538 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376575 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af7db490-ce95-4946-b358-c248703a4a53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376609 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376661 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 
08:10:01.376696 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-oauth-serving-cert\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376741 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-console-oauth-config\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376765 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-trusted-ca-bundle\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376781 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-console-config\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376800 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g8mx\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-kube-api-access-9g8mx\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 
08:10:01.376832 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376848 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.377109 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-service-ca\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.377522 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-oauth-serving-cert\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.378011 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-console-config\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 
08:10:01.378375 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-trusted-ca-bundle\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.381599 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-console-serving-cert\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.392503 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-console-oauth-config\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.396008 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2mqj\" (UniqueName: \"kubernetes.io/projected/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-kube-api-access-d2mqj\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478431 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g8mx\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-kube-api-access-9g8mx\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478493 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478513 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478548 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478567 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-config\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478638 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc 
kubenswrapper[4761]: I0307 08:10:01.478672 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478698 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af7db490-ce95-4946-b358-c248703a4a53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478744 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478779 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.482058 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc 
kubenswrapper[4761]: I0307 08:10:01.482232 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.485753 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.486896 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.487162 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.487528 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-config\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 
crc kubenswrapper[4761]: I0307 08:10:01.488873 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.489020 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0efec040dc2ef2408d0699e8dc67045c63207730fe365a5f7d021c687807de92/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.490551 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af7db490-ce95-4946-b358-c248703a4a53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.503590 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g8mx\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-kube-api-access-9g8mx\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.505662 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: 
I0307 08:10:01.555195 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.559250 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.581333 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a8f8341-0209-4fdd-8fdd-4373ec14e18c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-bs4zz\" (UID: \"6a8f8341-0209-4fdd-8fdd-4373ec14e18c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.607290 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a8f8341-0209-4fdd-8fdd-4373ec14e18c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-bs4zz\" (UID: \"6a8f8341-0209-4fdd-8fdd-4373ec14e18c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.659582 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.803030 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.694888 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wq5n6"] Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.696943 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.699343 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-jcb4v" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.699602 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.699753 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.734235 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-var-run-ovn\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.734267 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-var-run\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.734286 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-ovn-controller-tls-certs\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.734313 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-combined-ca-bundle\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.734345 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-var-log-ovn\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.734374 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dbxl\" (UniqueName: \"kubernetes.io/projected/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-kube-api-access-6dbxl\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.734400 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-scripts\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.736912 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-blwhr"]
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.739179 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.767337 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wq5n6"]
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.779127 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-blwhr"]
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.835752 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-var-run-ovn\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836030 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-var-run\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836057 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-ovn-controller-tls-certs\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836080 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-etc-ovs\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836117 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-combined-ca-bundle\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836152 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-var-log-ovn\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836181 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dbxl\" (UniqueName: \"kubernetes.io/projected/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-kube-api-access-6dbxl\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836204 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-scripts\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836229 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7edcf92b-670b-42be-bea0-082d948e2bef-scripts\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836262 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crxp8\" (UniqueName: \"kubernetes.io/projected/7edcf92b-670b-42be-bea0-082d948e2bef-kube-api-access-crxp8\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836288 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-var-lib\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836353 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-var-log\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836417 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-var-run\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836894 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-var-run-ovn\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.837907 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-var-run\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.838121 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-scripts\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.840584 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-var-log-ovn\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.841464 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-ovn-controller-tls-certs\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.844699 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-combined-ca-bundle\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.862455 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dbxl\" (UniqueName: \"kubernetes.io/projected/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-kube-api-access-6dbxl\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.942409 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crxp8\" (UniqueName: \"kubernetes.io/projected/7edcf92b-670b-42be-bea0-082d948e2bef-kube-api-access-crxp8\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.942506 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-var-lib\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.942575 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-var-log\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.942632 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-var-run\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.942760 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-etc-ovs\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.942907 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7edcf92b-670b-42be-bea0-082d948e2bef-scripts\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.943652 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-var-lib\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.943801 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-var-log\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.943868 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-var-run\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.944007 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-etc-ovs\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.948404 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7edcf92b-670b-42be-bea0-082d948e2bef-scripts\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.987849 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crxp8\" (UniqueName: \"kubernetes.io/projected/7edcf92b-670b-42be-bea0-082d948e2bef-kube-api-access-crxp8\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.038374 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.061895 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.346993 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.348690 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.350523 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.355971 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-2tk42"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.356308 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.356470 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.356594 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.361868 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.471466 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8327390a-a37e-4c5f-9662-88cd5b832a3d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.473239 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8327390a-a37e-4c5f-9662-88cd5b832a3d-config\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.473404 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a082ecc7-d23c-4b53-aae7-81e1ffd94708\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a082ecc7-d23c-4b53-aae7-81e1ffd94708\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.473546 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8327390a-a37e-4c5f-9662-88cd5b832a3d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.473628 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8327390a-a37e-4c5f-9662-88cd5b832a3d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.473665 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8327390a-a37e-4c5f-9662-88cd5b832a3d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.473734 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8327390a-a37e-4c5f-9662-88cd5b832a3d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.473832 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkfb5\" (UniqueName: \"kubernetes.io/projected/8327390a-a37e-4c5f-9662-88cd5b832a3d-kube-api-access-tkfb5\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.575860 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a082ecc7-d23c-4b53-aae7-81e1ffd94708\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a082ecc7-d23c-4b53-aae7-81e1ffd94708\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.575928 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8327390a-a37e-4c5f-9662-88cd5b832a3d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.575964 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8327390a-a37e-4c5f-9662-88cd5b832a3d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.575982 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8327390a-a37e-4c5f-9662-88cd5b832a3d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.576010 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8327390a-a37e-4c5f-9662-88cd5b832a3d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.576043 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkfb5\" (UniqueName: \"kubernetes.io/projected/8327390a-a37e-4c5f-9662-88cd5b832a3d-kube-api-access-tkfb5\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.576092 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8327390a-a37e-4c5f-9662-88cd5b832a3d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.576140 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8327390a-a37e-4c5f-9662-88cd5b832a3d-config\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.576623 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8327390a-a37e-4c5f-9662-88cd5b832a3d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.577103 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8327390a-a37e-4c5f-9662-88cd5b832a3d-config\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.578115 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8327390a-a37e-4c5f-9662-88cd5b832a3d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.580303 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8327390a-a37e-4c5f-9662-88cd5b832a3d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.580518 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.580566 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a082ecc7-d23c-4b53-aae7-81e1ffd94708\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a082ecc7-d23c-4b53-aae7-81e1ffd94708\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/837f77c39a1fd78b97d8d59db16bf033c6de6ff919f406ccfd9b7befdaf45e5a/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.583106 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8327390a-a37e-4c5f-9662-88cd5b832a3d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.596026 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkfb5\" (UniqueName: \"kubernetes.io/projected/8327390a-a37e-4c5f-9662-88cd5b832a3d-kube-api-access-tkfb5\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.609313 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8327390a-a37e-4c5f-9662-88cd5b832a3d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.613214 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a082ecc7-d23c-4b53-aae7-81e1ffd94708\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a082ecc7-d23c-4b53-aae7-81e1ffd94708\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.686906 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-2tk42"
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.694977 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Mar 07 08:10:06 crc kubenswrapper[4761]: I0307 08:10:06.876819 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Mar 07 08:10:06 crc kubenswrapper[4761]: I0307 08:10:06.879689 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:06 crc kubenswrapper[4761]: I0307 08:10:06.882080 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Mar 07 08:10:06 crc kubenswrapper[4761]: I0307 08:10:06.882195 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-k7j6m"
Mar 07 08:10:06 crc kubenswrapper[4761]: I0307 08:10:06.882372 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Mar 07 08:10:06 crc kubenswrapper[4761]: I0307 08:10:06.882372 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Mar 07 08:10:06 crc kubenswrapper[4761]: I0307 08:10:06.887692 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.075185 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/97d68716-6a14-491d-8f4c-c3884ce45af4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.075239 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d68716-6a14-491d-8f4c-c3884ce45af4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.075270 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d68716-6a14-491d-8f4c-c3884ce45af4-config\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.075292 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8p5f\" (UniqueName: \"kubernetes.io/projected/97d68716-6a14-491d-8f4c-c3884ce45af4-kube-api-access-m8p5f\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.075343 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d68716-6a14-491d-8f4c-c3884ce45af4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.075624 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d68716-6a14-491d-8f4c-c3884ce45af4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.075994 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97d68716-6a14-491d-8f4c-c3884ce45af4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.076085 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2939baf2-78b8-44a3-b1bd-50013c45b788\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2939baf2-78b8-44a3-b1bd-50013c45b788\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.178437 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d68716-6a14-491d-8f4c-c3884ce45af4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.178557 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d68716-6a14-491d-8f4c-c3884ce45af4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.178759 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97d68716-6a14-491d-8f4c-c3884ce45af4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.178836 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2939baf2-78b8-44a3-b1bd-50013c45b788\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2939baf2-78b8-44a3-b1bd-50013c45b788\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.178881 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/97d68716-6a14-491d-8f4c-c3884ce45af4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.178926 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d68716-6a14-491d-8f4c-c3884ce45af4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.178965 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d68716-6a14-491d-8f4c-c3884ce45af4-config\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.178990 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8p5f\" (UniqueName: \"kubernetes.io/projected/97d68716-6a14-491d-8f4c-c3884ce45af4-kube-api-access-m8p5f\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.179913 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/97d68716-6a14-491d-8f4c-c3884ce45af4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.180542 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d68716-6a14-491d-8f4c-c3884ce45af4-config\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.182337 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97d68716-6a14-491d-8f4c-c3884ce45af4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.182434 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.182496 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2939baf2-78b8-44a3-b1bd-50013c45b788\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2939baf2-78b8-44a3-b1bd-50013c45b788\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/17d016ef9758c3660806df1f86b3de0a2d340a1fce3755b9f9a86ce460525fe8/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.185040 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d68716-6a14-491d-8f4c-c3884ce45af4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.185636 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d68716-6a14-491d-8f4c-c3884ce45af4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.187403 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d68716-6a14-491d-8f4c-c3884ce45af4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.221816 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8p5f\" (UniqueName: \"kubernetes.io/projected/97d68716-6a14-491d-8f4c-c3884ce45af4-kube-api-access-m8p5f\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.222375 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2939baf2-78b8-44a3-b1bd-50013c45b788\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2939baf2-78b8-44a3-b1bd-50013c45b788\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.501490 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Mar 07 08:10:09 crc kubenswrapper[4761]: I0307 08:10:09.982910 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f0ccb6a-6367-409b-b996-4946fa2c8981","Type":"ContainerStarted","Data":"5a88e54647ce6f767b39c65ee3291c9158369ada88993df0b0fb3119f2c6c843"}
Mar 07 08:10:10 crc kubenswrapper[4761]: W0307 08:10:10.277913 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4e95617_c055_4b9f_ac38_32a41c2e8846.slice/crio-ee269bb694cac1ab18db5eb87bdd60b2b0d071371ac56d802fc930844d7106f3 WatchSource:0}: Error finding container ee269bb694cac1ab18db5eb87bdd60b2b0d071371ac56d802fc930844d7106f3: Status 404 returned error can't find the container with id ee269bb694cac1ab18db5eb87bdd60b2b0d071371ac56d802fc930844d7106f3
Mar 07 08:10:10 crc kubenswrapper[4761]: I0307 08:10:10.996690 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d4e95617-c055-4b9f-ac38-32a41c2e8846","Type":"ContainerStarted","Data":"ee269bb694cac1ab18db5eb87bdd60b2b0d071371ac56d802fc930844d7106f3"}
Mar 07 08:10:13 crc kubenswrapper[4761]: E0307 08:10:13.344029 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Mar 07 08:10:13 crc kubenswrapper[4761]: E0307 08:10:13.344530 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {}
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gjr25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(bc2f3dec-2838-4d30-93c2-631da252cdb7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:10:13 crc 
kubenswrapper[4761]: E0307 08:10:13.346294 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" Mar 07 08:10:14 crc kubenswrapper[4761]: E0307 08:10:14.030754 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" Mar 07 08:10:15 crc kubenswrapper[4761]: E0307 08:10:15.892381 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Mar 07 08:10:15 crc kubenswrapper[4761]: E0307 08:10:15.892677 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krzrn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(7201e0b2-1f44-45f0-b746-b98f8cb01f8f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:10:15 crc 
kubenswrapper[4761]: E0307 08:10:15.893985 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.258157 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.258684 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mxld6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-7cdvj_openstack(8beb56b5-ab82-42d2-ab67-94e2daa1e0cf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.260030 4761 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" podUID="8beb56b5-ab82-42d2-ab67-94e2daa1e0cf" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.298069 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.298224 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxjbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-cqd72_openstack(43253af6-83ba-4b96-8907-7294c07c4185): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.299540 4761 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" podUID="43253af6-83ba-4b96-8907-7294c07c4185" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.314579 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.314741 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6qk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-q29nq_openstack(ab01a96c-cf26-461f-b358-3ab6603ac44b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.319966 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq" podUID="ab01a96c-cf26-461f-b358-3ab6603ac44b" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.336306 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.336452 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2dlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPol
icy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-lmkd6_openstack(c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.338400 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.097654 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wq5n6"] Mar 07 08:10:23 crc kubenswrapper[4761]: W0307 08:10:23.103173 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c5d5a2b_fc39_4df1_8f46_e399a5e66a0d.slice/crio-e2e232aee1bba50f0525b400fd853ca9b039242c1ec129a804a7fce15c1b26fd WatchSource:0}: Error finding container e2e232aee1bba50f0525b400fd853ca9b039242c1ec129a804a7fce15c1b26fd: Status 404 returned error can't find the container with id e2e232aee1bba50f0525b400fd853ca9b039242c1ec129a804a7fce15c1b26fd Mar 07 08:10:23 crc kubenswrapper[4761]: W0307 08:10:23.104124 4761 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bf201ac_6f66_42fb_83bd_d5faaf6dd126.slice/crio-969e35180a7ab28d6358fb368e966f32c5a485420b0af7f5ab25f3b23bdc288f WatchSource:0}: Error finding container 969e35180a7ab28d6358fb368e966f32c5a485420b0af7f5ab25f3b23bdc288f: Status 404 returned error can't find the container with id 969e35180a7ab28d6358fb368e966f32c5a485420b0af7f5ab25f3b23bdc288f Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.106524 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56dd85c946-zcd4c"] Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.120914 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe","Type":"ContainerStarted","Data":"e2fd399761fd80116f1c5d796a4cd59bbf5f67c3ab7fc55ad76520080b0ca7eb"} Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.122477 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56dd85c946-zcd4c" event={"ID":"8bf201ac-6f66-42fb-83bd-d5faaf6dd126","Type":"ContainerStarted","Data":"969e35180a7ab28d6358fb368e966f32c5a485420b0af7f5ab25f3b23bdc288f"} Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.123490 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wq5n6" event={"ID":"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d","Type":"ContainerStarted","Data":"e2e232aee1bba50f0525b400fd853ca9b039242c1ec129a804a7fce15c1b26fd"} Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.125183 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d4e95617-c055-4b9f-ac38-32a41c2e8846","Type":"ContainerStarted","Data":"f23bd0112702d9189e6097cdac58bfb3285a865a88ecaac0364243f293e86c29"} Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.125285 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 07 08:10:23 crc 
kubenswrapper[4761]: I0307 08:10:23.128449 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f0ccb6a-6367-409b-b996-4946fa2c8981","Type":"ContainerStarted","Data":"2cf56872ac9893d058cbb33aafa63ee26c8effbdbb46579643800e24095b966d"} Mar 07 08:10:23 crc kubenswrapper[4761]: E0307 08:10:23.130088 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" podUID="43253af6-83ba-4b96-8907-7294c07c4185" Mar 07 08:10:23 crc kubenswrapper[4761]: E0307 08:10:23.130794 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.234326 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.241416 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.118097036 podStartE2EDuration="26.241399078s" podCreationTimestamp="2026-03-07 08:09:57 +0000 UTC" firstStartedPulling="2026-03-07 08:10:10.282509729 +0000 UTC m=+1267.191676204" lastFinishedPulling="2026-03-07 08:10:22.405811771 +0000 UTC m=+1279.314978246" observedRunningTime="2026-03-07 08:10:23.218775633 +0000 UTC m=+1280.127942108" watchObservedRunningTime="2026-03-07 08:10:23.241399078 +0000 UTC m=+1280.150565553" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.521590 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz"] Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.543578 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.558994 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547850-g6d9p"] Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.686374 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-blwhr"] Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.872883 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.880124 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.972438 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6qk9\" (UniqueName: \"kubernetes.io/projected/ab01a96c-cf26-461f-b358-3ab6603ac44b-kube-api-access-c6qk9\") pod \"ab01a96c-cf26-461f-b358-3ab6603ac44b\" (UID: \"ab01a96c-cf26-461f-b358-3ab6603ac44b\") " Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.972555 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-dns-svc\") pod \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.972618 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab01a96c-cf26-461f-b358-3ab6603ac44b-config\") pod \"ab01a96c-cf26-461f-b358-3ab6603ac44b\" (UID: \"ab01a96c-cf26-461f-b358-3ab6603ac44b\") " Mar 07 
08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.972680 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxld6\" (UniqueName: \"kubernetes.io/projected/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-kube-api-access-mxld6\") pod \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.972871 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-config\") pod \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.973658 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-config" (OuterVolumeSpecName: "config") pod "8beb56b5-ab82-42d2-ab67-94e2daa1e0cf" (UID: "8beb56b5-ab82-42d2-ab67-94e2daa1e0cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.974047 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab01a96c-cf26-461f-b358-3ab6603ac44b-config" (OuterVolumeSpecName: "config") pod "ab01a96c-cf26-461f-b358-3ab6603ac44b" (UID: "ab01a96c-cf26-461f-b358-3ab6603ac44b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.974206 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8beb56b5-ab82-42d2-ab67-94e2daa1e0cf" (UID: "8beb56b5-ab82-42d2-ab67-94e2daa1e0cf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.980954 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-kube-api-access-mxld6" (OuterVolumeSpecName: "kube-api-access-mxld6") pod "8beb56b5-ab82-42d2-ab67-94e2daa1e0cf" (UID: "8beb56b5-ab82-42d2-ab67-94e2daa1e0cf"). InnerVolumeSpecName "kube-api-access-mxld6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.980992 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab01a96c-cf26-461f-b358-3ab6603ac44b-kube-api-access-c6qk9" (OuterVolumeSpecName: "kube-api-access-c6qk9") pod "ab01a96c-cf26-461f-b358-3ab6603ac44b" (UID: "ab01a96c-cf26-461f-b358-3ab6603ac44b"). InnerVolumeSpecName "kube-api-access-c6qk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.074671 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6qk9\" (UniqueName: \"kubernetes.io/projected/ab01a96c-cf26-461f-b358-3ab6603ac44b-kube-api-access-c6qk9\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.074734 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.074751 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab01a96c-cf26-461f-b358-3ab6603ac44b-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.074764 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxld6\" (UniqueName: 
\"kubernetes.io/projected/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-kube-api-access-mxld6\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.074776 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.139218 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49dec540-e872-432f-bffe-1b0380ac0082","Type":"ContainerStarted","Data":"9ee7ce9221a6be795722d6e5f52ae5f0c03c8d8b610024b67cfd95e5744149c2"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.142917 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.142993 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" event={"ID":"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf","Type":"ContainerDied","Data":"2e2be1a8400eca045411b08be1637175746ef4ca4e34d9a0c70f57538db17f95"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.144693 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"813224b8-8c59-4153-b642-5ee9da95777d","Type":"ContainerStarted","Data":"989e755014017208d03dbc74013c0dbc3eb2d3cb892edef48a2df938485c63cc"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.146331 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"663244dc-847b-4dda-9c2c-4cae23e48e64","Type":"ContainerStarted","Data":"cac1f058abec00ed564c939ed9e3b5f26abb1b9f3f9688745486b048618d23c8"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.147963 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-blwhr" 
event={"ID":"7edcf92b-670b-42be-bea0-082d948e2bef","Type":"ContainerStarted","Data":"18d84c90d175af1c9280d9e7acf1a5d9449659687154db191b085e3436b776cf"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.149355 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7201e0b2-1f44-45f0-b746-b98f8cb01f8f","Type":"ContainerStarted","Data":"1e506ba29675507705351ff4dddbabf2575095cb15dab3309deefdd45c364615"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.151596 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerStarted","Data":"c763d26a506e8b9f5808c71df3c7678c3fb50676b34ea74d7614233d21c5de8d"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.154001 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq" event={"ID":"ab01a96c-cf26-461f-b358-3ab6603ac44b","Type":"ContainerDied","Data":"855102b150698b57ba7c9473297e3dfec8e7b0151e1a091b31c2e7792371c9c6"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.154125 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq" Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.174418 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547850-g6d9p" event={"ID":"3f1c6039-d723-41f6-a7a2-42f53281a5fa","Type":"ContainerStarted","Data":"d408cf2a167419040c5b22edc9391fa0dd856b9f6e79ca858b184ead6a96d058"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.176438 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" event={"ID":"6a8f8341-0209-4fdd-8fdd-4373ec14e18c","Type":"ContainerStarted","Data":"cf994db254bc3dbc9a18124f07714deb5700cdac62f1be50c2d9b7547e52c51d"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.179775 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56dd85c946-zcd4c" event={"ID":"8bf201ac-6f66-42fb-83bd-d5faaf6dd126","Type":"ContainerStarted","Data":"90b43f5537374f62d1896ed67dec069030abf7d34786b164dff37c567c3d4bb1"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.310240 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.339028 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7cdvj"] Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.351341 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7cdvj"] Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.364651 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-56dd85c946-zcd4c" podStartSLOduration=23.364627467 podStartE2EDuration="23.364627467s" podCreationTimestamp="2026-03-07 08:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:24.287850636 
+0000 UTC m=+1281.197017131" watchObservedRunningTime="2026-03-07 08:10:24.364627467 +0000 UTC m=+1281.273793962" Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.403544 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q29nq"] Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.451857 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q29nq"] Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.576275 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 07 08:10:25 crc kubenswrapper[4761]: I0307 08:10:25.192849 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8327390a-a37e-4c5f-9662-88cd5b832a3d","Type":"ContainerStarted","Data":"c420ac96c7040ab15f87b290d06c8a569def52001e0545b5bf0e9750ec9afae8"} Mar 07 08:10:25 crc kubenswrapper[4761]: I0307 08:10:25.195299 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"97d68716-6a14-491d-8f4c-c3884ce45af4","Type":"ContainerStarted","Data":"6ecf88f6c9366faa6999428a88bc19570a777b55b9f7460743b795773543a55d"} Mar 07 08:10:25 crc kubenswrapper[4761]: I0307 08:10:25.719175 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8beb56b5-ab82-42d2-ab67-94e2daa1e0cf" path="/var/lib/kubelet/pods/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf/volumes" Mar 07 08:10:25 crc kubenswrapper[4761]: I0307 08:10:25.719693 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab01a96c-cf26-461f-b358-3ab6603ac44b" path="/var/lib/kubelet/pods/ab01a96c-cf26-461f-b358-3ab6603ac44b/volumes" Mar 07 08:10:27 crc kubenswrapper[4761]: I0307 08:10:27.217944 4761 generic.go:334] "Generic (PLEG): container finished" podID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerID="2cf56872ac9893d058cbb33aafa63ee26c8effbdbb46579643800e24095b966d" exitCode=0 Mar 07 08:10:27 crc kubenswrapper[4761]: 
I0307 08:10:27.218925 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f0ccb6a-6367-409b-b996-4946fa2c8981","Type":"ContainerDied","Data":"2cf56872ac9893d058cbb33aafa63ee26c8effbdbb46579643800e24095b966d"} Mar 07 08:10:27 crc kubenswrapper[4761]: I0307 08:10:27.220860 4761 generic.go:334] "Generic (PLEG): container finished" podID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerID="e2fd399761fd80116f1c5d796a4cd59bbf5f67c3ab7fc55ad76520080b0ca7eb" exitCode=0 Mar 07 08:10:27 crc kubenswrapper[4761]: I0307 08:10:27.220916 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe","Type":"ContainerDied","Data":"e2fd399761fd80116f1c5d796a4cd59bbf5f67c3ab7fc55ad76520080b0ca7eb"} Mar 07 08:10:27 crc kubenswrapper[4761]: I0307 08:10:27.820317 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.341317 4761 generic.go:334] "Generic (PLEG): container finished" podID="3f1c6039-d723-41f6-a7a2-42f53281a5fa" containerID="4601975d730dbd935aa6c0dc81636d749aa74204df5d49980d3658c09cc61dfc" exitCode=0 Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.341765 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547850-g6d9p" event={"ID":"3f1c6039-d723-41f6-a7a2-42f53281a5fa","Type":"ContainerDied","Data":"4601975d730dbd935aa6c0dc81636d749aa74204df5d49980d3658c09cc61dfc"} Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.373285 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cqd72"] Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.421859 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5qlzh"] Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.423528 4761 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.493266 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5qlzh"] Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.578538 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-5qlzh\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.578956 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-config\") pod \"dnsmasq-dns-7cb5889db5-5qlzh\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.578980 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjqrt\" (UniqueName: \"kubernetes.io/projected/6e8f6876-f4f5-429e-9908-9b890bd215f7-kube-api-access-mjqrt\") pod \"dnsmasq-dns-7cb5889db5-5qlzh\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.682013 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-config\") pod \"dnsmasq-dns-7cb5889db5-5qlzh\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.682087 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjqrt\" 
(UniqueName: \"kubernetes.io/projected/6e8f6876-f4f5-429e-9908-9b890bd215f7-kube-api-access-mjqrt\") pod \"dnsmasq-dns-7cb5889db5-5qlzh\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.682226 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-5qlzh\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.683173 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-config\") pod \"dnsmasq-dns-7cb5889db5-5qlzh\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.684682 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-5qlzh\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.873845 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjqrt\" (UniqueName: \"kubernetes.io/projected/6e8f6876-f4f5-429e-9908-9b890bd215f7-kube-api-access-mjqrt\") pod \"dnsmasq-dns-7cb5889db5-5qlzh\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.088764 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.359571 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe","Type":"ContainerStarted","Data":"bc4ce0a34cb67bcf3f01549fd92d0bc8cb34dba7e3ad31088b50aae53d160618"} Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.363253 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" event={"ID":"43253af6-83ba-4b96-8907-7294c07c4185","Type":"ContainerDied","Data":"ce21de049c18f84f8b407a385c61cdc9165ad54a30750d97e3860eec6a5d7040"} Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.363288 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce21de049c18f84f8b407a385c61cdc9165ad54a30750d97e3860eec6a5d7040" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.387135 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=18.278949292 podStartE2EDuration="37.387114674s" podCreationTimestamp="2026-03-07 08:09:54 +0000 UTC" firstStartedPulling="2026-03-07 08:09:56.875619213 +0000 UTC m=+1253.784785688" lastFinishedPulling="2026-03-07 08:10:15.983784595 +0000 UTC m=+1272.892951070" observedRunningTime="2026-03-07 08:10:31.377344546 +0000 UTC m=+1288.286511021" watchObservedRunningTime="2026-03-07 08:10:31.387114674 +0000 UTC m=+1288.296281149" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.528391 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.537201 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.539400 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.539477 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.539594 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.543078 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-2t4xt" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.553149 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.565098 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.565126 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.595819 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.611241 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.611362 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-237b3cb4-965b-4a21-97b5-10e6f341a205\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-237b3cb4-965b-4a21-97b5-10e6f341a205\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.611405 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc8bs\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-kube-api-access-lc8bs\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.611949 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-lock\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.612077 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.614076 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-cache\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.670628 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5qlzh"] Mar 07 08:10:31 crc kubenswrapper[4761]: W0307 08:10:31.683012 
4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e8f6876_f4f5_429e_9908_9b890bd215f7.slice/crio-1580f62ed4b835efa056d38801500a326b1b466902057979960f0cd6384ef03c WatchSource:0}: Error finding container 1580f62ed4b835efa056d38801500a326b1b466902057979960f0cd6384ef03c: Status 404 returned error can't find the container with id 1580f62ed4b835efa056d38801500a326b1b466902057979960f0cd6384ef03c Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.722706 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.722773 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-237b3cb4-965b-4a21-97b5-10e6f341a205\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-237b3cb4-965b-4a21-97b5-10e6f341a205\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.722804 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc8bs\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-kube-api-access-lc8bs\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.722906 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-lock\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: 
I0307 08:10:31.722937 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.722992 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-cache\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.723510 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-cache\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: E0307 08:10:31.723604 4761 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 07 08:10:31 crc kubenswrapper[4761]: E0307 08:10:31.723617 4761 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 07 08:10:31 crc kubenswrapper[4761]: E0307 08:10:31.723652 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift podName:c5a46683-9d54-4f8e-909c-e7c5d3e0698f nodeName:}" failed. No retries permitted until 2026-03-07 08:10:32.223638088 +0000 UTC m=+1289.132804563 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift") pod "swift-storage-0" (UID: "c5a46683-9d54-4f8e-909c-e7c5d3e0698f") : configmap "swift-ring-files" not found Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.727175 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-lock\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.729414 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.729435 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-237b3cb4-965b-4a21-97b5-10e6f341a205\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-237b3cb4-965b-4a21-97b5-10e6f341a205\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/551088d031c4b4dfbcbf1279d5bb625792deda6a492aa04fbed06f0f543797f5/globalmount\"" pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.737295 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.744110 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc8bs\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-kube-api-access-lc8bs\") pod 
\"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.773793 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-237b3cb4-965b-4a21-97b5-10e6f341a205\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-237b3cb4-965b-4a21-97b5-10e6f341a205\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.022362 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.067635 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-jqk77"] Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.068888 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.071085 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547850-g6d9p" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.071415 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.071448 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.071590 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.083355 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jqk77"] Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.131752 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-dns-svc\") pod \"43253af6-83ba-4b96-8907-7294c07c4185\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.131842 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-config\") pod \"43253af6-83ba-4b96-8907-7294c07c4185\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.132055 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxjbg\" (UniqueName: \"kubernetes.io/projected/43253af6-83ba-4b96-8907-7294c07c4185-kube-api-access-wxjbg\") pod \"43253af6-83ba-4b96-8907-7294c07c4185\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.132287 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-config" (OuterVolumeSpecName: "config") pod "43253af6-83ba-4b96-8907-7294c07c4185" (UID: "43253af6-83ba-4b96-8907-7294c07c4185"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.132359 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "43253af6-83ba-4b96-8907-7294c07c4185" (UID: "43253af6-83ba-4b96-8907-7294c07c4185"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.133001 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.133027 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.211662 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43253af6-83ba-4b96-8907-7294c07c4185-kube-api-access-wxjbg" (OuterVolumeSpecName: "kube-api-access-wxjbg") pod "43253af6-83ba-4b96-8907-7294c07c4185" (UID: "43253af6-83ba-4b96-8907-7294c07c4185"). InnerVolumeSpecName "kube-api-access-wxjbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.234878 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdqtk\" (UniqueName: \"kubernetes.io/projected/3f1c6039-d723-41f6-a7a2-42f53281a5fa-kube-api-access-cdqtk\") pod \"3f1c6039-d723-41f6-a7a2-42f53281a5fa\" (UID: \"3f1c6039-d723-41f6-a7a2-42f53281a5fa\") " Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.235208 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.235249 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-combined-ca-bundle\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.235317 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-ring-data-devices\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.235355 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-swiftconf\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 
08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.235389 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-dispersionconf\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.235415 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-scripts\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.235472 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kllgt\" (UniqueName: \"kubernetes.io/projected/34132cc8-6037-4a17-9a58-5736caf6130b-kube-api-access-kllgt\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.235496 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34132cc8-6037-4a17-9a58-5736caf6130b-etc-swift\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.235555 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxjbg\" (UniqueName: \"kubernetes.io/projected/43253af6-83ba-4b96-8907-7294c07c4185-kube-api-access-wxjbg\") on node \"crc\" DevicePath \"\""
Mar 07 08:10:32 crc kubenswrapper[4761]: E0307 08:10:32.235932 4761 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 07 08:10:32 crc kubenswrapper[4761]: E0307 08:10:32.235991 4761 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 07 08:10:32 crc kubenswrapper[4761]: E0307 08:10:32.236039 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift podName:c5a46683-9d54-4f8e-909c-e7c5d3e0698f nodeName:}" failed. No retries permitted until 2026-03-07 08:10:33.23602071 +0000 UTC m=+1290.145187265 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift") pod "swift-storage-0" (UID: "c5a46683-9d54-4f8e-909c-e7c5d3e0698f") : configmap "swift-ring-files" not found
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.239788 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f1c6039-d723-41f6-a7a2-42f53281a5fa-kube-api-access-cdqtk" (OuterVolumeSpecName: "kube-api-access-cdqtk") pod "3f1c6039-d723-41f6-a7a2-42f53281a5fa" (UID: "3f1c6039-d723-41f6-a7a2-42f53281a5fa"). InnerVolumeSpecName "kube-api-access-cdqtk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.338444 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-combined-ca-bundle\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.339437 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-ring-data-devices\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.339499 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-swiftconf\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.339550 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-dispersionconf\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.339595 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-scripts\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.339753 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kllgt\" (UniqueName: \"kubernetes.io/projected/34132cc8-6037-4a17-9a58-5736caf6130b-kube-api-access-kllgt\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.339801 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34132cc8-6037-4a17-9a58-5736caf6130b-etc-swift\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.339938 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdqtk\" (UniqueName: \"kubernetes.io/projected/3f1c6039-d723-41f6-a7a2-42f53281a5fa-kube-api-access-cdqtk\") on node \"crc\" DevicePath \"\""
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.340191 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34132cc8-6037-4a17-9a58-5736caf6130b-etc-swift\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.340243 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-ring-data-devices\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.340737 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-scripts\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.343672 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-swiftconf\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.343941 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-dispersionconf\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.351604 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-combined-ca-bundle\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.360305 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kllgt\" (UniqueName: \"kubernetes.io/projected/34132cc8-6037-4a17-9a58-5736caf6130b-kube-api-access-kllgt\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.381808 4761 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547850-g6d9p"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.382659 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547850-g6d9p" event={"ID":"3f1c6039-d723-41f6-a7a2-42f53281a5fa","Type":"ContainerDied","Data":"d408cf2a167419040c5b22edc9391fa0dd856b9f6e79ca858b184ead6a96d058"}
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.382706 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d408cf2a167419040c5b22edc9391fa0dd856b9f6e79ca858b184ead6a96d058"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.394812 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-blwhr" event={"ID":"7edcf92b-670b-42be-bea0-082d948e2bef","Type":"ContainerStarted","Data":"93f574cfdb4d7d0319065ccfd2009fa13929b9f5e5dbbfd11d17f659ec9d8edb"}
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.398684 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" event={"ID":"6e8f6876-f4f5-429e-9908-9b890bd215f7","Type":"ContainerStarted","Data":"1580f62ed4b835efa056d38801500a326b1b466902057979960f0cd6384ef03c"}
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.401670 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bc2f3dec-2838-4d30-93c2-631da252cdb7","Type":"ContainerStarted","Data":"89a6b5588731808b0bfe82c5f4e9ce1720f8b54e7fe66d37411578cd9536d97b"}
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.403640 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8327390a-a37e-4c5f-9662-88cd5b832a3d","Type":"ContainerStarted","Data":"6285a8c6654f966687b8867527f044b68b5968bd5f5aeae739b2c1cde6b9ea8b"}
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.405300 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"97d68716-6a14-491d-8f4c-c3884ce45af4","Type":"ContainerStarted","Data":"ec1114fa73a89f91939abf908393f205d9105203a7ebb77218da024b7dbdb076"}
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.405618 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jqk77"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.407843 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"813224b8-8c59-4153-b642-5ee9da95777d","Type":"ContainerStarted","Data":"10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1"}
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.407975 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.418085 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f0ccb6a-6367-409b-b996-4946fa2c8981","Type":"ContainerStarted","Data":"833704fdf8ae28e1b304b84c220a7f77b10ff62bbb503ec99590b5acc753c1c6"}
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.422024 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-cqd72"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.424931 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" event={"ID":"6a8f8341-0209-4fdd-8fdd-4373ec14e18c","Type":"ContainerStarted","Data":"e471b1ccf94ead465d637a815477c0ad4b397f45f7749b213df0d010ab29d06b"}
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.440729 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-56dd85c946-zcd4c"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.452044 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=25.947584311 podStartE2EDuration="32.452022791s" podCreationTimestamp="2026-03-07 08:10:00 +0000 UTC" firstStartedPulling="2026-03-07 08:10:23.615989159 +0000 UTC m=+1280.525155634" lastFinishedPulling="2026-03-07 08:10:30.120427599 +0000 UTC m=+1287.029594114" observedRunningTime="2026-03-07 08:10:32.440087947 +0000 UTC m=+1289.349254422" watchObservedRunningTime="2026-03-07 08:10:32.452022791 +0000 UTC m=+1289.361189256"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.533428 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" podStartSLOduration=27.56031864 podStartE2EDuration="32.533406209s" podCreationTimestamp="2026-03-07 08:10:00 +0000 UTC" firstStartedPulling="2026-03-07 08:10:23.616313477 +0000 UTC m=+1280.525479952" lastFinishedPulling="2026-03-07 08:10:28.589401046 +0000 UTC m=+1285.498567521" observedRunningTime="2026-03-07 08:10:32.481707235 +0000 UTC m=+1289.390873710" watchObservedRunningTime="2026-03-07 08:10:32.533406209 +0000 UTC m=+1289.442572684"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.543321 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.666855596 podStartE2EDuration="36.543303111s" podCreationTimestamp="2026-03-07 08:09:56 +0000 UTC" firstStartedPulling="2026-03-07 08:10:09.428563674 +0000 UTC m=+1266.337730199" lastFinishedPulling="2026-03-07 08:10:22.305011219 +0000 UTC m=+1279.214177714" observedRunningTime="2026-03-07 08:10:32.508910507 +0000 UTC m=+1289.418076982" watchObservedRunningTime="2026-03-07 08:10:32.543303111 +0000 UTC m=+1289.452469586"
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.624675 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5dd9c59c48-q98tn"]
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.724045 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cqd72"]
Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.761595 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cqd72"]
Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.144920 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jqk77"]
Mar 07 08:10:33 crc kubenswrapper[4761]: W0307 08:10:33.158333 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34132cc8_6037_4a17_9a58_5736caf6130b.slice/crio-279ce1b3aa96e47b801aba7e6ddb05970bd3015519986cd5516b58ff04cf8381 WatchSource:0}: Error finding container 279ce1b3aa96e47b801aba7e6ddb05970bd3015519986cd5516b58ff04cf8381: Status 404 returned error can't find the container with id 279ce1b3aa96e47b801aba7e6ddb05970bd3015519986cd5516b58ff04cf8381
Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.163385 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547844-4dg2j"]
Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.174488 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547844-4dg2j"]
Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.271303 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0"
Mar 07 08:10:33 crc kubenswrapper[4761]: E0307 08:10:33.271524 4761 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 07 08:10:33 crc kubenswrapper[4761]: E0307 08:10:33.271552 4761 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 07 08:10:33 crc kubenswrapper[4761]: E0307 08:10:33.271615 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift podName:c5a46683-9d54-4f8e-909c-e7c5d3e0698f nodeName:}" failed. No retries permitted until 2026-03-07 08:10:35.271595642 +0000 UTC m=+1292.180762117 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift") pod "swift-storage-0" (UID: "c5a46683-9d54-4f8e-909c-e7c5d3e0698f") : configmap "swift-ring-files" not found
Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.432437 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerStarted","Data":"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5"}
Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.434752 4761 generic.go:334] "Generic (PLEG): container finished" podID="6e8f6876-f4f5-429e-9908-9b890bd215f7" containerID="a71c7c3a354307f54d5910f5284820373d2ba892b20b40983d41a6a146a44c75" exitCode=0
Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.435752 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" event={"ID":"6e8f6876-f4f5-429e-9908-9b890bd215f7","Type":"ContainerDied","Data":"a71c7c3a354307f54d5910f5284820373d2ba892b20b40983d41a6a146a44c75"}
Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.437106 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jqk77" event={"ID":"34132cc8-6037-4a17-9a58-5736caf6130b","Type":"ContainerStarted","Data":"279ce1b3aa96e47b801aba7e6ddb05970bd3015519986cd5516b58ff04cf8381"}
Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.446134 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wq5n6" event={"ID":"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d","Type":"ContainerStarted","Data":"91a739e500eef3bb2d05f3d08f9deb0b35ee47315523efe1d3c963157e0d7c54"}
Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.447140 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-wq5n6"
Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.466136 4761
generic.go:334] "Generic (PLEG): container finished" podID="7edcf92b-670b-42be-bea0-082d948e2bef" containerID="93f574cfdb4d7d0319065ccfd2009fa13929b9f5e5dbbfd11d17f659ec9d8edb" exitCode=0
Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.466399 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-blwhr" event={"ID":"7edcf92b-670b-42be-bea0-082d948e2bef","Type":"ContainerDied","Data":"93f574cfdb4d7d0319065ccfd2009fa13929b9f5e5dbbfd11d17f659ec9d8edb"}
Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.506264 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-wq5n6" podStartSLOduration=25.457800553 podStartE2EDuration="31.506224914s" podCreationTimestamp="2026-03-07 08:10:02 +0000 UTC" firstStartedPulling="2026-03-07 08:10:23.105257498 +0000 UTC m=+1280.014423973" lastFinishedPulling="2026-03-07 08:10:29.153681819 +0000 UTC m=+1286.062848334" observedRunningTime="2026-03-07 08:10:33.502620682 +0000 UTC m=+1290.411787167" watchObservedRunningTime="2026-03-07 08:10:33.506224914 +0000 UTC m=+1290.415391469"
Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.722213 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43253af6-83ba-4b96-8907-7294c07c4185" path="/var/lib/kubelet/pods/43253af6-83ba-4b96-8907-7294c07c4185/volumes"
Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.722764 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2ec016f-1c81-4af0-8f87-99481163f94c" path="/var/lib/kubelet/pods/a2ec016f-1c81-4af0-8f87-99481163f94c/volumes"
Mar 07 08:10:34 crc kubenswrapper[4761]: I0307 08:10:34.480199 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" event={"ID":"6e8f6876-f4f5-429e-9908-9b890bd215f7","Type":"ContainerStarted","Data":"18bd356b27523c6307038934611f40d3e730ca8eb63d6853e0975378361f0131"}
Mar 07 08:10:34 crc kubenswrapper[4761]: I0307 08:10:34.481966 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh"
Mar 07 08:10:34 crc kubenswrapper[4761]: I0307 08:10:34.486227 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-blwhr" event={"ID":"7edcf92b-670b-42be-bea0-082d948e2bef","Type":"ContainerStarted","Data":"65ddc2cc32d21c7323b011d4f18e221cecec1fd58e14edd75f6e01ce4e660245"}
Mar 07 08:10:34 crc kubenswrapper[4761]: I0307 08:10:34.486261 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-blwhr" event={"ID":"7edcf92b-670b-42be-bea0-082d948e2bef","Type":"ContainerStarted","Data":"b5bfc50da5db4f103174d5e711a56347234d695be796fd389862f7b1692b94ca"}
Mar 07 08:10:34 crc kubenswrapper[4761]: I0307 08:10:34.486303 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:34 crc kubenswrapper[4761]: I0307 08:10:34.486399 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:10:34 crc kubenswrapper[4761]: I0307 08:10:34.496962 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" podStartSLOduration=4.087492838 podStartE2EDuration="4.496948745s" podCreationTimestamp="2026-03-07 08:10:30 +0000 UTC" firstStartedPulling="2026-03-07 08:10:31.687960581 +0000 UTC m=+1288.597127056" lastFinishedPulling="2026-03-07 08:10:32.097416478 +0000 UTC m=+1289.006582963" observedRunningTime="2026-03-07 08:10:34.494204525 +0000 UTC m=+1291.403371020" watchObservedRunningTime="2026-03-07 08:10:34.496948745 +0000 UTC m=+1291.406115220"
Mar 07 08:10:34 crc kubenswrapper[4761]: I0307 08:10:34.521949 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-blwhr" podStartSLOduration=27.562356884 podStartE2EDuration="32.52193171s" podCreationTimestamp="2026-03-07 08:10:02 +0000 UTC" firstStartedPulling="2026-03-07 08:10:23.697130701 +0000 UTC m=+1280.606297186" lastFinishedPulling="2026-03-07 08:10:28.656705537 +0000 UTC m=+1285.565872012" observedRunningTime="2026-03-07 08:10:34.513785303 +0000 UTC m=+1291.422951778" watchObservedRunningTime="2026-03-07 08:10:34.52193171 +0000 UTC m=+1291.431098185"
Mar 07 08:10:35 crc kubenswrapper[4761]: I0307 08:10:35.319329 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0"
Mar 07 08:10:35 crc kubenswrapper[4761]: E0307 08:10:35.319495 4761 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 07 08:10:35 crc kubenswrapper[4761]: E0307 08:10:35.319516 4761 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 07 08:10:35 crc kubenswrapper[4761]: E0307 08:10:35.319566 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift podName:c5a46683-9d54-4f8e-909c-e7c5d3e0698f nodeName:}" failed. No retries permitted until 2026-03-07 08:10:39.319550143 +0000 UTC m=+1296.228716618 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift") pod "swift-storage-0" (UID: "c5a46683-9d54-4f8e-909c-e7c5d3e0698f") : configmap "swift-ring-files" not found
Mar 07 08:10:36 crc kubenswrapper[4761]: I0307 08:10:36.055424 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Mar 07 08:10:36 crc kubenswrapper[4761]: I0307 08:10:36.056143 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Mar 07 08:10:37 crc kubenswrapper[4761]: I0307 08:10:37.755981 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Mar 07 08:10:37 crc kubenswrapper[4761]: I0307 08:10:37.756396 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.390816 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.507528 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.888818 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5b43-account-create-update-jpq6b"]
Mar 07 08:10:38 crc kubenswrapper[4761]: E0307 08:10:38.889339 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1c6039-d723-41f6-a7a2-42f53281a5fa" containerName="oc"
Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.889355 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1c6039-d723-41f6-a7a2-42f53281a5fa" containerName="oc"
Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.889625 4761 memory_manager.go:354] "RemoveStaleState removing state"
podUID="3f1c6039-d723-41f6-a7a2-42f53281a5fa" containerName="oc"
Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.890452 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5b43-account-create-update-jpq6b"
Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.895709 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.900648 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b43-account-create-update-jpq6b"]
Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.977285 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-cv77d"]
Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.978964 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cv77d"
Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.994320 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-cv77d"]
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.010063 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-operator-scripts\") pod \"keystone-db-create-cv77d\" (UID: \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\") " pod="openstack/keystone-db-create-cv77d"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.010355 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-operator-scripts\") pod \"keystone-5b43-account-create-update-jpq6b\" (UID: \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\") " pod="openstack/keystone-5b43-account-create-update-jpq6b"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.010471 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp6zn\" (UniqueName: \"kubernetes.io/projected/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-kube-api-access-lp6zn\") pod \"keystone-5b43-account-create-update-jpq6b\" (UID: \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\") " pod="openstack/keystone-5b43-account-create-update-jpq6b"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.010576 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4htf\" (UniqueName: \"kubernetes.io/projected/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-kube-api-access-z4htf\") pod \"keystone-db-create-cv77d\" (UID: \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\") " pod="openstack/keystone-db-create-cv77d"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.071149 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-458dc"]
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.072505 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-458dc"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.083396 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-557c-account-create-update-jtvjg"]
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.084955 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-557c-account-create-update-jtvjg"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.086576 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.097567 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-458dc"]
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.105458 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-557c-account-create-update-jtvjg"]
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.112912 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzx9f\" (UniqueName: \"kubernetes.io/projected/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-kube-api-access-wzx9f\") pod \"placement-db-create-458dc\" (UID: \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\") " pod="openstack/placement-db-create-458dc"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.113039 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-operator-scripts\") pod \"keystone-db-create-cv77d\" (UID: \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\") " pod="openstack/keystone-db-create-cv77d"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.113069 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-operator-scripts\") pod \"placement-db-create-458dc\" (UID: \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\") " pod="openstack/placement-db-create-458dc"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.113126 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-operator-scripts\") pod \"keystone-5b43-account-create-update-jpq6b\" (UID: \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\") " pod="openstack/keystone-5b43-account-create-update-jpq6b"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.113165 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp6zn\" (UniqueName: \"kubernetes.io/projected/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-kube-api-access-lp6zn\") pod \"keystone-5b43-account-create-update-jpq6b\" (UID: \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\") " pod="openstack/keystone-5b43-account-create-update-jpq6b"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.113200 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4htf\" (UniqueName: \"kubernetes.io/projected/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-kube-api-access-z4htf\") pod \"keystone-db-create-cv77d\" (UID: \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\") " pod="openstack/keystone-db-create-cv77d"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.114155 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-operator-scripts\") pod \"keystone-db-create-cv77d\" (UID: \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\") " pod="openstack/keystone-db-create-cv77d"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.114678 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-operator-scripts\") pod \"keystone-5b43-account-create-update-jpq6b\" (UID: \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\") " pod="openstack/keystone-5b43-account-create-update-jpq6b"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.130662 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4htf\" (UniqueName: \"kubernetes.io/projected/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-kube-api-access-z4htf\") pod \"keystone-db-create-cv77d\" (UID: \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\") " pod="openstack/keystone-db-create-cv77d"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.151476 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp6zn\" (UniqueName: \"kubernetes.io/projected/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-kube-api-access-lp6zn\") pod \"keystone-5b43-account-create-update-jpq6b\" (UID: \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\") " pod="openstack/keystone-5b43-account-create-update-jpq6b"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.217948 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh4ql\" (UniqueName: \"kubernetes.io/projected/b12971f6-3d67-4225-beab-46d9d3505ae1-kube-api-access-qh4ql\") pod \"placement-557c-account-create-update-jtvjg\" (UID: \"b12971f6-3d67-4225-beab-46d9d3505ae1\") " pod="openstack/placement-557c-account-create-update-jtvjg"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.218066 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-operator-scripts\") pod \"placement-db-create-458dc\" (UID: \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\") " pod="openstack/placement-db-create-458dc"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.218278 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzx9f\" (UniqueName: \"kubernetes.io/projected/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-kube-api-access-wzx9f\") pod \"placement-db-create-458dc\" (UID: \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\") " pod="openstack/placement-db-create-458dc"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.218371 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12971f6-3d67-4225-beab-46d9d3505ae1-operator-scripts\") pod \"placement-557c-account-create-update-jtvjg\" (UID: \"b12971f6-3d67-4225-beab-46d9d3505ae1\") " pod="openstack/placement-557c-account-create-update-jtvjg"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.218974 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-operator-scripts\") pod \"placement-db-create-458dc\" (UID: \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\") " pod="openstack/placement-db-create-458dc"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.219126 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5b43-account-create-update-jpq6b"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.235766 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzx9f\" (UniqueName: \"kubernetes.io/projected/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-kube-api-access-wzx9f\") pod \"placement-db-create-458dc\" (UID: \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\") " pod="openstack/placement-db-create-458dc"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.297872 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cv77d"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.322974 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.323042 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12971f6-3d67-4225-beab-46d9d3505ae1-operator-scripts\") pod \"placement-557c-account-create-update-jtvjg\" (UID: \"b12971f6-3d67-4225-beab-46d9d3505ae1\") " pod="openstack/placement-557c-account-create-update-jtvjg"
Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.323077 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh4ql\" (UniqueName: \"kubernetes.io/projected/b12971f6-3d67-4225-beab-46d9d3505ae1-kube-api-access-qh4ql\") pod \"placement-557c-account-create-update-jtvjg\" (UID: \"b12971f6-3d67-4225-beab-46d9d3505ae1\") " pod="openstack/placement-557c-account-create-update-jtvjg"
Mar 07 08:10:39 crc kubenswrapper[4761]: E0307 08:10:39.324153 4761 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 07 08:10:39 crc kubenswrapper[4761]: E0307 08:10:39.324174 4761 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 07 08:10:39 crc kubenswrapper[4761]: E0307 08:10:39.324213 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift podName:c5a46683-9d54-4f8e-909c-e7c5d3e0698f nodeName:}" failed.
No retries permitted until 2026-03-07 08:10:47.324197547 +0000 UTC m=+1304.233364022 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift") pod "swift-storage-0" (UID: "c5a46683-9d54-4f8e-909c-e7c5d3e0698f") : configmap "swift-ring-files" not found Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.331293 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12971f6-3d67-4225-beab-46d9d3505ae1-operator-scripts\") pod \"placement-557c-account-create-update-jtvjg\" (UID: \"b12971f6-3d67-4225-beab-46d9d3505ae1\") " pod="openstack/placement-557c-account-create-update-jtvjg" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.350624 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh4ql\" (UniqueName: \"kubernetes.io/projected/b12971f6-3d67-4225-beab-46d9d3505ae1-kube-api-access-qh4ql\") pod \"placement-557c-account-create-update-jtvjg\" (UID: \"b12971f6-3d67-4225-beab-46d9d3505ae1\") " pod="openstack/placement-557c-account-create-update-jtvjg" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.388991 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-458dc" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.426007 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-557c-account-create-update-jtvjg" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.546725 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" event={"ID":"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f","Type":"ContainerStarted","Data":"a269d72aae7f21a36693603be9bf3e2bdc5f0a95b59c92edc9bb043030d3a13b"} Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.550322 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jqk77" event={"ID":"34132cc8-6037-4a17-9a58-5736caf6130b","Type":"ContainerStarted","Data":"eb24dde25e9feceb32b1e0885d44501fcd066a8b2d11595c82eb5c68daa220aa"} Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.556214 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8327390a-a37e-4c5f-9662-88cd5b832a3d","Type":"ContainerStarted","Data":"0b181bbfbc108dff33a55a6643a0861e2cecaecb3df260c1b26f9f15c7d15da7"} Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.573703 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=19.571425013 podStartE2EDuration="34.573679658s" podCreationTimestamp="2026-03-07 08:10:05 +0000 UTC" firstStartedPulling="2026-03-07 08:10:24.267371825 +0000 UTC m=+1281.176538300" lastFinishedPulling="2026-03-07 08:10:39.26962647 +0000 UTC m=+1296.178792945" observedRunningTime="2026-03-07 08:10:39.566922207 +0000 UTC m=+1296.476088682" watchObservedRunningTime="2026-03-07 08:10:39.573679658 +0000 UTC m=+1296.482846133" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.627041 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-jqk77" podStartSLOduration=1.640041277 podStartE2EDuration="7.627019234s" podCreationTimestamp="2026-03-07 08:10:32 +0000 UTC" firstStartedPulling="2026-03-07 08:10:33.161033152 +0000 UTC 
m=+1290.070199627" lastFinishedPulling="2026-03-07 08:10:39.148011109 +0000 UTC m=+1296.057177584" observedRunningTime="2026-03-07 08:10:39.590425714 +0000 UTC m=+1296.499592189" watchObservedRunningTime="2026-03-07 08:10:39.627019234 +0000 UTC m=+1296.536185699" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.641331 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=23.2877671 podStartE2EDuration="37.641312187s" podCreationTimestamp="2026-03-07 08:10:02 +0000 UTC" firstStartedPulling="2026-03-07 08:10:24.844587746 +0000 UTC m=+1281.753754221" lastFinishedPulling="2026-03-07 08:10:39.198132833 +0000 UTC m=+1296.107299308" observedRunningTime="2026-03-07 08:10:39.616263591 +0000 UTC m=+1296.525430066" watchObservedRunningTime="2026-03-07 08:10:39.641312187 +0000 UTC m=+1296.550478662" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.696831 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.747270 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.772691 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b43-account-create-update-jpq6b"] Mar 07 08:10:39 crc kubenswrapper[4761]: W0307 08:10:39.925485 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c95a8dd_8ebd_4c6c_a4bb_21181abd3ea0.slice/crio-6de9f64903271fa613ae17739e46bf372fec41fb2cf9a66024b05a7e8afa252f WatchSource:0}: Error finding container 6de9f64903271fa613ae17739e46bf372fec41fb2cf9a66024b05a7e8afa252f: Status 404 returned error can't find the container with id 6de9f64903271fa613ae17739e46bf372fec41fb2cf9a66024b05a7e8afa252f Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 
08:10:39.927732 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-cv77d"] Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.008858 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-458dc"] Mar 07 08:10:40 crc kubenswrapper[4761]: W0307 08:10:40.009262 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ecdc2ad_5812_4bb2_a6ea_8659b3993985.slice/crio-a6621923d144c8da576144af2ecf2d1384b1f0e1d56bcb356e01b483f42e2298 WatchSource:0}: Error finding container a6621923d144c8da576144af2ecf2d1384b1f0e1d56bcb356e01b483f42e2298: Status 404 returned error can't find the container with id a6621923d144c8da576144af2ecf2d1384b1f0e1d56bcb356e01b483f42e2298 Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.091692 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-557c-account-create-update-jtvjg"] Mar 07 08:10:40 crc kubenswrapper[4761]: W0307 08:10:40.099682 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb12971f6_3d67_4225_beab_46d9d3505ae1.slice/crio-6ffd98308f17c6b4f6ef32d29b9155db09cf76d98289c555382272f0bba491af WatchSource:0}: Error finding container 6ffd98308f17c6b4f6ef32d29b9155db09cf76d98289c555382272f0bba491af: Status 404 returned error can't find the container with id 6ffd98308f17c6b4f6ef32d29b9155db09cf76d98289c555382272f0bba491af Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.187776 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-b9fmh"] Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.189325 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.213212 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-b9fmh"] Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.243385 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc4048ba-7b5a-48ab-b609-21cc5598d56c-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-b9fmh\" (UID: \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\") " pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.243471 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wphbs\" (UniqueName: \"kubernetes.io/projected/dc4048ba-7b5a-48ab-b609-21cc5598d56c-kube-api-access-wphbs\") pod \"mysqld-exporter-openstack-db-create-b9fmh\" (UID: \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\") " pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.340538 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-ee06-account-create-update-s6d4f"] Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.341855 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.344160 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.345246 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdbmq\" (UniqueName: \"kubernetes.io/projected/70c13d8a-a25a-419e-9267-6894a86897cc-kube-api-access-sdbmq\") pod \"mysqld-exporter-ee06-account-create-update-s6d4f\" (UID: \"70c13d8a-a25a-419e-9267-6894a86897cc\") " pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.345365 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c13d8a-a25a-419e-9267-6894a86897cc-operator-scripts\") pod \"mysqld-exporter-ee06-account-create-update-s6d4f\" (UID: \"70c13d8a-a25a-419e-9267-6894a86897cc\") " pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.345427 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc4048ba-7b5a-48ab-b609-21cc5598d56c-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-b9fmh\" (UID: \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\") " pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.346092 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc4048ba-7b5a-48ab-b609-21cc5598d56c-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-b9fmh\" (UID: \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\") " 
pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.346149 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wphbs\" (UniqueName: \"kubernetes.io/projected/dc4048ba-7b5a-48ab-b609-21cc5598d56c-kube-api-access-wphbs\") pod \"mysqld-exporter-openstack-db-create-b9fmh\" (UID: \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\") " pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.372363 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wphbs\" (UniqueName: \"kubernetes.io/projected/dc4048ba-7b5a-48ab-b609-21cc5598d56c-kube-api-access-wphbs\") pod \"mysqld-exporter-openstack-db-create-b9fmh\" (UID: \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\") " pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.447501 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdbmq\" (UniqueName: \"kubernetes.io/projected/70c13d8a-a25a-419e-9267-6894a86897cc-kube-api-access-sdbmq\") pod \"mysqld-exporter-ee06-account-create-update-s6d4f\" (UID: \"70c13d8a-a25a-419e-9267-6894a86897cc\") " pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.447576 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c13d8a-a25a-419e-9267-6894a86897cc-operator-scripts\") pod \"mysqld-exporter-ee06-account-create-update-s6d4f\" (UID: \"70c13d8a-a25a-419e-9267-6894a86897cc\") " pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.448249 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/70c13d8a-a25a-419e-9267-6894a86897cc-operator-scripts\") pod \"mysqld-exporter-ee06-account-create-update-s6d4f\" (UID: \"70c13d8a-a25a-419e-9267-6894a86897cc\") " pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.468776 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdbmq\" (UniqueName: \"kubernetes.io/projected/70c13d8a-a25a-419e-9267-6894a86897cc-kube-api-access-sdbmq\") pod \"mysqld-exporter-ee06-account-create-update-s6d4f\" (UID: \"70c13d8a-a25a-419e-9267-6894a86897cc\") " pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.501968 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.544766 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.573991 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.658328 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.964413 4761 generic.go:334] "Generic (PLEG): container finished" podID="af7db490-ce95-4946-b358-c248703a4a53" containerID="56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5" exitCode=0 Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.964492 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerDied","Data":"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5"} Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.966079 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.968023 4761 generic.go:334] "Generic (PLEG): container finished" podID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" containerID="a269d72aae7f21a36693603be9bf3e2bdc5f0a95b59c92edc9bb043030d3a13b" exitCode=0 Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.968097 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" event={"ID":"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f","Type":"ContainerDied","Data":"a269d72aae7f21a36693603be9bf3e2bdc5f0a95b59c92edc9bb043030d3a13b"} Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.968127 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" event={"ID":"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f","Type":"ContainerStarted","Data":"c49e7047447bccbddb275f76f211640b1c7d8ba235ea330a2b7265c257f39e83"} Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.969229 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.970931 4761 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/placement-db-create-458dc" event={"ID":"9ecdc2ad-5812-4bb2-a6ea-8659b3993985","Type":"ContainerStarted","Data":"b269857bff81c96eb8751012ebe23820ca1ecd1ca87d5b49700f96f3184ec666"} Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.970952 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-458dc" event={"ID":"9ecdc2ad-5812-4bb2-a6ea-8659b3993985","Type":"ContainerStarted","Data":"a6621923d144c8da576144af2ecf2d1384b1f0e1d56bcb356e01b483f42e2298"} Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.981076 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b43-account-create-update-jpq6b" event={"ID":"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b","Type":"ContainerStarted","Data":"92d60cbd1931c0910d9a77a2b32fd62f51cb82efcb041fb0c916607c3418054a"} Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.981279 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b43-account-create-update-jpq6b" event={"ID":"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b","Type":"ContainerStarted","Data":"4c5f15f84b273e19bc69f36a776a438b641c5b3624b97eff112dd2756d9b5eee"} Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.995112 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-ee06-account-create-update-s6d4f"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.001755 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cv77d" event={"ID":"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0","Type":"ContainerStarted","Data":"604a9a3091641041b296f96b4f1d808f47de7c313c82fc9218ada4352b3da08b"} Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.001830 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cv77d" event={"ID":"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0","Type":"ContainerStarted","Data":"6de9f64903271fa613ae17739e46bf372fec41fb2cf9a66024b05a7e8afa252f"} Mar 07 08:10:41 crc 
kubenswrapper[4761]: I0307 08:10:41.003990 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-557c-account-create-update-jtvjg" event={"ID":"b12971f6-3d67-4225-beab-46d9d3505ae1","Type":"ContainerStarted","Data":"6ffd98308f17c6b4f6ef32d29b9155db09cf76d98289c555382272f0bba491af"} Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.007273 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"97d68716-6a14-491d-8f4c-c3884ce45af4","Type":"ContainerStarted","Data":"2bccf623bfc00cb7dbab0f14818939ff94b6cf6efc5e63bbf74919aa2306a0c7"} Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.009278 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.009373 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.090109 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.093289 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.162199 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.206268 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-458dc" podStartSLOduration=2.206242642 podStartE2EDuration="2.206242642s" podCreationTimestamp="2026-03-07 08:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:41.069730472 +0000 UTC m=+1297.978896947" watchObservedRunningTime="2026-03-07 
08:10:41.206242642 +0000 UTC m=+1298.115409107" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.280502 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-cv77d" podStartSLOduration=3.2804723080000002 podStartE2EDuration="3.280472308s" podCreationTimestamp="2026-03-07 08:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:41.115089165 +0000 UTC m=+1298.024255640" watchObservedRunningTime="2026-03-07 08:10:41.280472308 +0000 UTC m=+1298.189638803" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.291957 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-557c-account-create-update-jtvjg" podStartSLOduration=2.29193459 podStartE2EDuration="2.29193459s" podCreationTimestamp="2026-03-07 08:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:41.162219473 +0000 UTC m=+1298.071385968" watchObservedRunningTime="2026-03-07 08:10:41.29193459 +0000 UTC m=+1298.201101085" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.304310 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" podStartSLOduration=-9223371988.550488 podStartE2EDuration="48.304287944s" podCreationTimestamp="2026-03-07 08:09:53 +0000 UTC" firstStartedPulling="2026-03-07 08:09:54.260923905 +0000 UTC m=+1251.170090380" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:41.184605632 +0000 UTC m=+1298.093772127" watchObservedRunningTime="2026-03-07 08:10:41.304287944 +0000 UTC m=+1298.213454419" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.369340 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lmkd6"] Mar 07 08:10:41 crc 
kubenswrapper[4761]: I0307 08:10:41.398798 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-4l5m5"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.403934 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.408478 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-4l5m5"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.408591 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.479884 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-ee06-account-create-update-s6d4f"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.488318 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.488406 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.488445 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-config\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " 
pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.488502 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9phzf\" (UniqueName: \"kubernetes.io/projected/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-kube-api-access-9phzf\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.529335 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-p5vt2"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.533461 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.538669 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.554437 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-p5vt2"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.589833 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.589894 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.589926 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-config\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.589992 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9phzf\" (UniqueName: \"kubernetes.io/projected/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-kube-api-access-9phzf\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.598368 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.608221 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.609055 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-config\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.610637 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-4l5m5"] Mar 07 
08:10:41 crc kubenswrapper[4761]: E0307 08:10:41.611411 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-9phzf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" podUID="1e3b3bdb-2ca8-4b68-951f-3d271adc27ab" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.628349 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.630270 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.635488 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-mdwj2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.635794 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.635937 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.636151 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.642040 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-5r7cq"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.644292 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.646060 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.670530 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9phzf\" (UniqueName: \"kubernetes.io/projected/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-kube-api-access-9phzf\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.678815 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.691779 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vshwm\" (UniqueName: \"kubernetes.io/projected/f12e8753-c20a-460e-a4a6-a69f604df651-kube-api-access-vshwm\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.691842 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24f8k\" (UniqueName: \"kubernetes.io/projected/a6c2f90d-fff9-4f86-b1c4-432d76275714-kube-api-access-24f8k\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.691876 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c2f90d-fff9-4f86-b1c4-432d76275714-config\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc 
kubenswrapper[4761]: I0307 08:10:41.691930 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a6c2f90d-fff9-4f86-b1c4-432d76275714-ovn-rundir\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.692026 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f12e8753-c20a-460e-a4a6-a69f604df651-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.692085 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f12e8753-c20a-460e-a4a6-a69f604df651-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.692144 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a6c2f90d-fff9-4f86-b1c4-432d76275714-ovs-rundir\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.692174 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f12e8753-c20a-460e-a4a6-a69f604df651-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.692238 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f12e8753-c20a-460e-a4a6-a69f604df651-scripts\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.692255 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f12e8753-c20a-460e-a4a6-a69f604df651-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.692291 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12e8753-c20a-460e-a4a6-a69f604df651-config\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.692315 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c2f90d-fff9-4f86-b1c4-432d76275714-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.692346 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c2f90d-fff9-4f86-b1c4-432d76275714-combined-ca-bundle\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.703889 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/mysqld-exporter-openstack-db-create-b9fmh"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.736051 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5r7cq"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794197 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-dns-svc\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794233 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794261 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a6c2f90d-fff9-4f86-b1c4-432d76275714-ovn-rundir\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794298 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-config\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794386 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794412 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f12e8753-c20a-460e-a4a6-a69f604df651-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794432 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f12e8753-c20a-460e-a4a6-a69f604df651-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794787 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrfs7\" (UniqueName: \"kubernetes.io/projected/067b5424-8f75-4bb9-ab09-588e4e306a28-kube-api-access-qrfs7\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794814 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a6c2f90d-fff9-4f86-b1c4-432d76275714-ovs-rundir\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794844 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f12e8753-c20a-460e-a4a6-a69f604df651-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794862 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f12e8753-c20a-460e-a4a6-a69f604df651-scripts\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794877 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f12e8753-c20a-460e-a4a6-a69f604df651-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794897 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12e8753-c20a-460e-a4a6-a69f604df651-config\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794916 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c2f90d-fff9-4f86-b1c4-432d76275714-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794944 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c2f90d-fff9-4f86-b1c4-432d76275714-combined-ca-bundle\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " 
pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794984 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vshwm\" (UniqueName: \"kubernetes.io/projected/f12e8753-c20a-460e-a4a6-a69f604df651-kube-api-access-vshwm\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.795012 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24f8k\" (UniqueName: \"kubernetes.io/projected/a6c2f90d-fff9-4f86-b1c4-432d76275714-kube-api-access-24f8k\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.795041 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c2f90d-fff9-4f86-b1c4-432d76275714-config\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.796083 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c2f90d-fff9-4f86-b1c4-432d76275714-config\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.796254 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a6c2f90d-fff9-4f86-b1c4-432d76275714-ovs-rundir\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.796250 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a6c2f90d-fff9-4f86-b1c4-432d76275714-ovn-rundir\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.798921 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f12e8753-c20a-460e-a4a6-a69f604df651-scripts\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.799317 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12e8753-c20a-460e-a4a6-a69f604df651-config\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.800493 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f12e8753-c20a-460e-a4a6-a69f604df651-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.811368 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c2f90d-fff9-4f86-b1c4-432d76275714-combined-ca-bundle\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.811520 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f12e8753-c20a-460e-a4a6-a69f604df651-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.815730 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f12e8753-c20a-460e-a4a6-a69f604df651-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.818477 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24f8k\" (UniqueName: \"kubernetes.io/projected/a6c2f90d-fff9-4f86-b1c4-432d76275714-kube-api-access-24f8k\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.819319 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f12e8753-c20a-460e-a4a6-a69f604df651-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.820322 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c2f90d-fff9-4f86-b1c4-432d76275714-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.827643 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vshwm\" (UniqueName: \"kubernetes.io/projected/f12e8753-c20a-460e-a4a6-a69f604df651-kube-api-access-vshwm\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 
08:10:41.849571 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.854943 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.896833 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.896931 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrfs7\" (UniqueName: \"kubernetes.io/projected/067b5424-8f75-4bb9-ab09-588e4e306a28-kube-api-access-qrfs7\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.897226 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-dns-svc\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.897248 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.897276 4761 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-config\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.897797 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.899178 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-dns-svc\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.899387 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-config\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.899443 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.920508 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrfs7\" (UniqueName: 
\"kubernetes.io/projected/067b5424-8f75-4bb9-ab09-588e4e306a28-kube-api-access-qrfs7\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.956273 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.029932 4761 generic.go:334] "Generic (PLEG): container finished" podID="ab06ca00-a8f7-40a5-a332-b00fc1b4de8b" containerID="92d60cbd1931c0910d9a77a2b32fd62f51cb82efcb041fb0c916607c3418054a" exitCode=0 Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.030162 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b43-account-create-update-jpq6b" event={"ID":"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b","Type":"ContainerDied","Data":"92d60cbd1931c0910d9a77a2b32fd62f51cb82efcb041fb0c916607c3418054a"} Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.044954 4761 generic.go:334] "Generic (PLEG): container finished" podID="7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0" containerID="604a9a3091641041b296f96b4f1d808f47de7c313c82fc9218ada4352b3da08b" exitCode=0 Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.045456 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cv77d" event={"ID":"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0","Type":"ContainerDied","Data":"604a9a3091641041b296f96b4f1d808f47de7c313c82fc9218ada4352b3da08b"} Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.066790 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" event={"ID":"dc4048ba-7b5a-48ab-b609-21cc5598d56c","Type":"ContainerStarted","Data":"73b7f5c1ad87980cb468fd2ea0a74afe21853e4cb686a20061f8344d21dbba9b"} Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.066838 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" event={"ID":"dc4048ba-7b5a-48ab-b609-21cc5598d56c","Type":"ContainerStarted","Data":"c8cb24daca88f7a8bea74d9525b8af2ceb9418b38530dfe91f56c4d7f70f5cc4"} Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.069498 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-557c-account-create-update-jtvjg" event={"ID":"b12971f6-3d67-4225-beab-46d9d3505ae1","Type":"ContainerDied","Data":"7e5ba0bde8469cf1aa8078524709ff2366dd55b3bda6ff4b838052755b33fd24"} Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.069225 4761 generic.go:334] "Generic (PLEG): container finished" podID="b12971f6-3d67-4225-beab-46d9d3505ae1" containerID="7e5ba0bde8469cf1aa8078524709ff2366dd55b3bda6ff4b838052755b33fd24" exitCode=0 Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.083589 4761 generic.go:334] "Generic (PLEG): container finished" podID="9ecdc2ad-5812-4bb2-a6ea-8659b3993985" containerID="b269857bff81c96eb8751012ebe23820ca1ecd1ca87d5b49700f96f3184ec666" exitCode=0 Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.083909 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-458dc" event={"ID":"9ecdc2ad-5812-4bb2-a6ea-8659b3993985","Type":"ContainerDied","Data":"b269857bff81c96eb8751012ebe23820ca1ecd1ca87d5b49700f96f3184ec666"} Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.091557 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" event={"ID":"70c13d8a-a25a-419e-9267-6894a86897cc","Type":"ContainerStarted","Data":"9521ac8897cc5031589e3a97f98d6810344dcf9dfb4311f397e7415d00277014"} Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.091647 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" 
event={"ID":"70c13d8a-a25a-419e-9267-6894a86897cc","Type":"ContainerStarted","Data":"d428f517ed41c6bc0fc513cfdc8ba6b1bddd3a46ac1511d84e371bec92db552b"} Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.092173 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.106439 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.109335 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.121895 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" podStartSLOduration=2.121870324 podStartE2EDuration="2.121870324s" podCreationTimestamp="2026-03-07 08:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:42.114480696 +0000 UTC m=+1299.023647171" watchObservedRunningTime="2026-03-07 08:10:42.121870324 +0000 UTC m=+1299.031036799" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.152115 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.162687 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" podStartSLOduration=2.162671651 podStartE2EDuration="2.162671651s" podCreationTimestamp="2026-03-07 08:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:42.12919209 +0000 UTC m=+1299.038358575" watchObservedRunningTime="2026-03-07 08:10:42.162671651 +0000 UTC m=+1299.071838126" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.260031 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9phzf\" (UniqueName: \"kubernetes.io/projected/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-kube-api-access-9phzf\") pod \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.260283 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-ovsdbserver-nb\") pod \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.260388 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-dns-svc\") pod \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.260414 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-config\") pod 
\"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.261577 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-config" (OuterVolumeSpecName: "config") pod "1e3b3bdb-2ca8-4b68-951f-3d271adc27ab" (UID: "1e3b3bdb-2ca8-4b68-951f-3d271adc27ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.261569 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1e3b3bdb-2ca8-4b68-951f-3d271adc27ab" (UID: "1e3b3bdb-2ca8-4b68-951f-3d271adc27ab"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.261628 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1e3b3bdb-2ca8-4b68-951f-3d271adc27ab" (UID: "1e3b3bdb-2ca8-4b68-951f-3d271adc27ab"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.262841 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.262863 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.262874 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.282262 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-kube-api-access-9phzf" (OuterVolumeSpecName: "kube-api-access-9phzf") pod "1e3b3bdb-2ca8-4b68-951f-3d271adc27ab" (UID: "1e3b3bdb-2ca8-4b68-951f-3d271adc27ab"). InnerVolumeSpecName "kube-api-access-9phzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.365924 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9phzf\" (UniqueName: \"kubernetes.io/projected/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-kube-api-access-9phzf\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.488004 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-p5vt2"] Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.647020 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5b43-account-create-update-jpq6b" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.776987 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp6zn\" (UniqueName: \"kubernetes.io/projected/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-kube-api-access-lp6zn\") pod \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\" (UID: \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\") " Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.777113 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-operator-scripts\") pod \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\" (UID: \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\") " Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.778180 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab06ca00-a8f7-40a5-a332-b00fc1b4de8b" (UID: "ab06ca00-a8f7-40a5-a332-b00fc1b4de8b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.780829 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-kube-api-access-lp6zn" (OuterVolumeSpecName: "kube-api-access-lp6zn") pod "ab06ca00-a8f7-40a5-a332-b00fc1b4de8b" (UID: "ab06ca00-a8f7-40a5-a332-b00fc1b4de8b"). InnerVolumeSpecName "kube-api-access-lp6zn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.860020 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.873083 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5r7cq"] Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.879920 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp6zn\" (UniqueName: \"kubernetes.io/projected/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-kube-api-access-lp6zn\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.879955 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.102409 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f12e8753-c20a-460e-a4a6-a69f604df651","Type":"ContainerStarted","Data":"bf01097cb387b15932ce4af784c5272a67dc8b30c530375ca1f9934c2c88a567"} Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.103816 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b43-account-create-update-jpq6b" event={"ID":"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b","Type":"ContainerDied","Data":"4c5f15f84b273e19bc69f36a776a438b641c5b3624b97eff112dd2756d9b5eee"} Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.103840 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c5f15f84b273e19bc69f36a776a438b641c5b3624b97eff112dd2756d9b5eee" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.103889 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5b43-account-create-update-jpq6b" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.109969 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5r7cq" event={"ID":"067b5424-8f75-4bb9-ab09-588e4e306a28","Type":"ContainerStarted","Data":"d528b0ee5cdcc3c74b6be0125ba8b9050c5885a6808688d7b153ceddf46e1503"} Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.111739 4761 generic.go:334] "Generic (PLEG): container finished" podID="dc4048ba-7b5a-48ab-b609-21cc5598d56c" containerID="73b7f5c1ad87980cb468fd2ea0a74afe21853e4cb686a20061f8344d21dbba9b" exitCode=0 Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.111793 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" event={"ID":"dc4048ba-7b5a-48ab-b609-21cc5598d56c","Type":"ContainerDied","Data":"73b7f5c1ad87980cb468fd2ea0a74afe21853e4cb686a20061f8344d21dbba9b"} Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.115222 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-p5vt2" event={"ID":"a6c2f90d-fff9-4f86-b1c4-432d76275714","Type":"ContainerStarted","Data":"60918582cad757a4567bcc01d8771b24b6fe1183017a99f8c1d5b50abbb80a14"} Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.120958 4761 generic.go:334] "Generic (PLEG): container finished" podID="70c13d8a-a25a-419e-9267-6894a86897cc" containerID="9521ac8897cc5031589e3a97f98d6810344dcf9dfb4311f397e7415d00277014" exitCode=0 Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.121015 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" event={"ID":"70c13d8a-a25a-419e-9267-6894a86897cc","Type":"ContainerDied","Data":"9521ac8897cc5031589e3a97f98d6810344dcf9dfb4311f397e7415d00277014"} Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.121176 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.122439 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" containerName="dnsmasq-dns" containerID="cri-o://c49e7047447bccbddb275f76f211640b1c7d8ba235ea330a2b7265c257f39e83" gracePeriod=10 Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.363197 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-4l5m5"] Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.373134 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-4l5m5"] Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.744008 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cv77d" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.745527 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e3b3bdb-2ca8-4b68-951f-3d271adc27ab" path="/var/lib/kubelet/pods/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab/volumes" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.770237 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.770297 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 
08:10:43.807087 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-operator-scripts\") pod \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\" (UID: \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\") " Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.812054 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0" (UID: "7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.816730 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4htf\" (UniqueName: \"kubernetes.io/projected/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-kube-api-access-z4htf\") pod \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\" (UID: \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\") " Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.823520 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.839903 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-kube-api-access-z4htf" (OuterVolumeSpecName: "kube-api-access-z4htf") pod "7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0" (UID: "7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0"). InnerVolumeSpecName "kube-api-access-z4htf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:43 crc kubenswrapper[4761]: E0307 08:10:43.843207 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7fdacd4_7f0b_4b48_bae3_7d9cfebb1d4f.slice/crio-conmon-c49e7047447bccbddb275f76f211640b1c7d8ba235ea330a2b7265c257f39e83.scope\": RecentStats: unable to find data in memory cache]" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.895100 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-458dc" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.911393 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-557c-account-create-update-jtvjg" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.925322 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzx9f\" (UniqueName: \"kubernetes.io/projected/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-kube-api-access-wzx9f\") pod \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\" (UID: \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\") " Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.925397 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-operator-scripts\") pod \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\" (UID: \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\") " Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.926013 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4htf\" (UniqueName: \"kubernetes.io/projected/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-kube-api-access-z4htf\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.926626 4761 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9ecdc2ad-5812-4bb2-a6ea-8659b3993985" (UID: "9ecdc2ad-5812-4bb2-a6ea-8659b3993985"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.932358 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-kube-api-access-wzx9f" (OuterVolumeSpecName: "kube-api-access-wzx9f") pod "9ecdc2ad-5812-4bb2-a6ea-8659b3993985" (UID: "9ecdc2ad-5812-4bb2-a6ea-8659b3993985"). InnerVolumeSpecName "kube-api-access-wzx9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.032346 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh4ql\" (UniqueName: \"kubernetes.io/projected/b12971f6-3d67-4225-beab-46d9d3505ae1-kube-api-access-qh4ql\") pod \"b12971f6-3d67-4225-beab-46d9d3505ae1\" (UID: \"b12971f6-3d67-4225-beab-46d9d3505ae1\") " Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.032772 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12971f6-3d67-4225-beab-46d9d3505ae1-operator-scripts\") pod \"b12971f6-3d67-4225-beab-46d9d3505ae1\" (UID: \"b12971f6-3d67-4225-beab-46d9d3505ae1\") " Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.035735 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.035787 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzx9f\" (UniqueName: 
\"kubernetes.io/projected/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-kube-api-access-wzx9f\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.035756 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b12971f6-3d67-4225-beab-46d9d3505ae1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b12971f6-3d67-4225-beab-46d9d3505ae1" (UID: "b12971f6-3d67-4225-beab-46d9d3505ae1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.038664 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b12971f6-3d67-4225-beab-46d9d3505ae1-kube-api-access-qh4ql" (OuterVolumeSpecName: "kube-api-access-qh4ql") pod "b12971f6-3d67-4225-beab-46d9d3505ae1" (UID: "b12971f6-3d67-4225-beab-46d9d3505ae1"). InnerVolumeSpecName "kube-api-access-qh4ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.137783 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh4ql\" (UniqueName: \"kubernetes.io/projected/b12971f6-3d67-4225-beab-46d9d3505ae1-kube-api-access-qh4ql\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.137814 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12971f6-3d67-4225-beab-46d9d3505ae1-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.140817 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-557c-account-create-update-jtvjg" event={"ID":"b12971f6-3d67-4225-beab-46d9d3505ae1","Type":"ContainerDied","Data":"6ffd98308f17c6b4f6ef32d29b9155db09cf76d98289c555382272f0bba491af"} Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.140841 4761 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-557c-account-create-update-jtvjg" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.140857 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ffd98308f17c6b4f6ef32d29b9155db09cf76d98289c555382272f0bba491af" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.143319 4761 generic.go:334] "Generic (PLEG): container finished" podID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" containerID="c49e7047447bccbddb275f76f211640b1c7d8ba235ea330a2b7265c257f39e83" exitCode=0 Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.143472 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" event={"ID":"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f","Type":"ContainerDied","Data":"c49e7047447bccbddb275f76f211640b1c7d8ba235ea330a2b7265c257f39e83"} Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.145540 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-458dc" event={"ID":"9ecdc2ad-5812-4bb2-a6ea-8659b3993985","Type":"ContainerDied","Data":"a6621923d144c8da576144af2ecf2d1384b1f0e1d56bcb356e01b483f42e2298"} Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.145567 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6621923d144c8da576144af2ecf2d1384b1f0e1d56bcb356e01b483f42e2298" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.145641 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-458dc" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.152429 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cv77d" event={"ID":"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0","Type":"ContainerDied","Data":"6de9f64903271fa613ae17739e46bf372fec41fb2cf9a66024b05a7e8afa252f"} Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.152474 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6de9f64903271fa613ae17739e46bf372fec41fb2cf9a66024b05a7e8afa252f" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.152561 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cv77d" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.170853 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-p5vt2" event={"ID":"a6c2f90d-fff9-4f86-b1c4-432d76275714","Type":"ContainerStarted","Data":"c0bb65fb8bafe3b70f53a19adfe587dd01ae29b1610541345c1d656edade81e1"} Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.669382 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dnbwr"] Mar 07 08:10:44 crc kubenswrapper[4761]: E0307 08:10:44.669931 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0" containerName="mariadb-database-create" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.669952 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0" containerName="mariadb-database-create" Mar 07 08:10:44 crc kubenswrapper[4761]: E0307 08:10:44.669963 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab06ca00-a8f7-40a5-a332-b00fc1b4de8b" containerName="mariadb-account-create-update" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.669971 4761 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ab06ca00-a8f7-40a5-a332-b00fc1b4de8b" containerName="mariadb-account-create-update" Mar 07 08:10:44 crc kubenswrapper[4761]: E0307 08:10:44.669997 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ecdc2ad-5812-4bb2-a6ea-8659b3993985" containerName="mariadb-database-create" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.670006 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ecdc2ad-5812-4bb2-a6ea-8659b3993985" containerName="mariadb-database-create" Mar 07 08:10:44 crc kubenswrapper[4761]: E0307 08:10:44.670021 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b12971f6-3d67-4225-beab-46d9d3505ae1" containerName="mariadb-account-create-update" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.670030 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b12971f6-3d67-4225-beab-46d9d3505ae1" containerName="mariadb-account-create-update" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.670266 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b12971f6-3d67-4225-beab-46d9d3505ae1" containerName="mariadb-account-create-update" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.670284 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab06ca00-a8f7-40a5-a332-b00fc1b4de8b" containerName="mariadb-account-create-update" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.670297 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0" containerName="mariadb-database-create" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.670310 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ecdc2ad-5812-4bb2-a6ea-8659b3993985" containerName="mariadb-database-create" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.671181 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.672932 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.680041 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dnbwr"] Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.694954 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.698431 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.771797 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wphbs\" (UniqueName: \"kubernetes.io/projected/dc4048ba-7b5a-48ab-b609-21cc5598d56c-kube-api-access-wphbs\") pod \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\" (UID: \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\") " Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.772321 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c13d8a-a25a-419e-9267-6894a86897cc-operator-scripts\") pod \"70c13d8a-a25a-419e-9267-6894a86897cc\" (UID: \"70c13d8a-a25a-419e-9267-6894a86897cc\") " Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.772539 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc4048ba-7b5a-48ab-b609-21cc5598d56c-operator-scripts\") pod \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\" (UID: \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\") " Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.772693 4761 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c13d8a-a25a-419e-9267-6894a86897cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70c13d8a-a25a-419e-9267-6894a86897cc" (UID: "70c13d8a-a25a-419e-9267-6894a86897cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.772759 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdbmq\" (UniqueName: \"kubernetes.io/projected/70c13d8a-a25a-419e-9267-6894a86897cc-kube-api-access-sdbmq\") pod \"70c13d8a-a25a-419e-9267-6894a86897cc\" (UID: \"70c13d8a-a25a-419e-9267-6894a86897cc\") " Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.773067 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-operator-scripts\") pod \"root-account-create-update-dnbwr\" (UID: \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\") " pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.773104 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkvf4\" (UniqueName: \"kubernetes.io/projected/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-kube-api-access-pkvf4\") pod \"root-account-create-update-dnbwr\" (UID: \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\") " pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.773121 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc4048ba-7b5a-48ab-b609-21cc5598d56c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc4048ba-7b5a-48ab-b609-21cc5598d56c" (UID: "dc4048ba-7b5a-48ab-b609-21cc5598d56c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.773895 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc4048ba-7b5a-48ab-b609-21cc5598d56c-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.773926 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c13d8a-a25a-419e-9267-6894a86897cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.782626 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c13d8a-a25a-419e-9267-6894a86897cc-kube-api-access-sdbmq" (OuterVolumeSpecName: "kube-api-access-sdbmq") pod "70c13d8a-a25a-419e-9267-6894a86897cc" (UID: "70c13d8a-a25a-419e-9267-6894a86897cc"). InnerVolumeSpecName "kube-api-access-sdbmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.785648 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc4048ba-7b5a-48ab-b609-21cc5598d56c-kube-api-access-wphbs" (OuterVolumeSpecName: "kube-api-access-wphbs") pod "dc4048ba-7b5a-48ab-b609-21cc5598d56c" (UID: "dc4048ba-7b5a-48ab-b609-21cc5598d56c"). InnerVolumeSpecName "kube-api-access-wphbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.875261 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-operator-scripts\") pod \"root-account-create-update-dnbwr\" (UID: \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\") " pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.875333 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkvf4\" (UniqueName: \"kubernetes.io/projected/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-kube-api-access-pkvf4\") pod \"root-account-create-update-dnbwr\" (UID: \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\") " pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.875506 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wphbs\" (UniqueName: \"kubernetes.io/projected/dc4048ba-7b5a-48ab-b609-21cc5598d56c-kube-api-access-wphbs\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.875520 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdbmq\" (UniqueName: \"kubernetes.io/projected/70c13d8a-a25a-419e-9267-6894a86897cc-kube-api-access-sdbmq\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.877007 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-operator-scripts\") pod \"root-account-create-update-dnbwr\" (UID: \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\") " pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.894379 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkvf4\" (UniqueName: 
\"kubernetes.io/projected/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-kube-api-access-pkvf4\") pod \"root-account-create-update-dnbwr\" (UID: \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\") " pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:45 crc kubenswrapper[4761]: I0307 08:10:45.009785 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:45 crc kubenswrapper[4761]: I0307 08:10:45.186360 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" event={"ID":"70c13d8a-a25a-419e-9267-6894a86897cc","Type":"ContainerDied","Data":"d428f517ed41c6bc0fc513cfdc8ba6b1bddd3a46ac1511d84e371bec92db552b"} Mar 07 08:10:45 crc kubenswrapper[4761]: I0307 08:10:45.186409 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d428f517ed41c6bc0fc513cfdc8ba6b1bddd3a46ac1511d84e371bec92db552b" Mar 07 08:10:45 crc kubenswrapper[4761]: I0307 08:10:45.186435 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:45 crc kubenswrapper[4761]: I0307 08:10:45.189189 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" event={"ID":"dc4048ba-7b5a-48ab-b609-21cc5598d56c","Type":"ContainerDied","Data":"c8cb24daca88f7a8bea74d9525b8af2ceb9418b38530dfe91f56c4d7f70f5cc4"} Mar 07 08:10:45 crc kubenswrapper[4761]: I0307 08:10:45.189228 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8cb24daca88f7a8bea74d9525b8af2ceb9418b38530dfe91f56c4d7f70f5cc4" Mar 07 08:10:45 crc kubenswrapper[4761]: I0307 08:10:45.189277 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:45 crc kubenswrapper[4761]: I0307 08:10:45.492104 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dnbwr"] Mar 07 08:10:46 crc kubenswrapper[4761]: I0307 08:10:46.223912 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5r7cq" event={"ID":"067b5424-8f75-4bb9-ab09-588e4e306a28","Type":"ContainerStarted","Data":"c498f4c379cf8807574cdc10a758374ae889e538fe1b9f03b94de8aa56f32a78"} Mar 07 08:10:46 crc kubenswrapper[4761]: I0307 08:10:46.231663 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dnbwr" event={"ID":"d9b1a5f6-106b-4c9e-a847-133b75cfaa94","Type":"ContainerStarted","Data":"e63c59e066e9d2a49094f56fbfdf8178914c396f3f7b111720ca5f38b6156c6b"} Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.243144 4761 generic.go:334] "Generic (PLEG): container finished" podID="d9b1a5f6-106b-4c9e-a847-133b75cfaa94" containerID="bf13a0b3293e1a4646cfabacb3571d4c17dab1d592a83ac1045bde8ab5526426" exitCode=0 Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.243319 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dnbwr" event={"ID":"d9b1a5f6-106b-4c9e-a847-133b75cfaa94","Type":"ContainerDied","Data":"bf13a0b3293e1a4646cfabacb3571d4c17dab1d592a83ac1045bde8ab5526426"} Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.245052 4761 generic.go:334] "Generic (PLEG): container finished" podID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerID="c498f4c379cf8807574cdc10a758374ae889e538fe1b9f03b94de8aa56f32a78" exitCode=0 Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.245444 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5r7cq" 
event={"ID":"067b5424-8f75-4bb9-ab09-588e4e306a28","Type":"ContainerDied","Data":"c498f4c379cf8807574cdc10a758374ae889e538fe1b9f03b94de8aa56f32a78"} Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.290550 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-p5vt2" podStartSLOduration=6.290522824 podStartE2EDuration="6.290522824s" podCreationTimestamp="2026-03-07 08:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:47.286392959 +0000 UTC m=+1304.195559434" watchObservedRunningTime="2026-03-07 08:10:47.290522824 +0000 UTC m=+1304.199689299" Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.366086 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:47 crc kubenswrapper[4761]: E0307 08:10:47.366529 4761 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 07 08:10:47 crc kubenswrapper[4761]: E0307 08:10:47.366547 4761 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 07 08:10:47 crc kubenswrapper[4761]: E0307 08:10:47.366594 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift podName:c5a46683-9d54-4f8e-909c-e7c5d3e0698f nodeName:}" failed. No retries permitted until 2026-03-07 08:11:03.366574037 +0000 UTC m=+1320.275740512 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift") pod "swift-storage-0" (UID: "c5a46683-9d54-4f8e-909c-e7c5d3e0698f") : configmap "swift-ring-files" not found Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.981307 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-tz9rv"] Mar 07 08:10:47 crc kubenswrapper[4761]: E0307 08:10:47.981857 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c13d8a-a25a-419e-9267-6894a86897cc" containerName="mariadb-account-create-update" Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.981881 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c13d8a-a25a-419e-9267-6894a86897cc" containerName="mariadb-account-create-update" Mar 07 08:10:47 crc kubenswrapper[4761]: E0307 08:10:47.981909 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc4048ba-7b5a-48ab-b609-21cc5598d56c" containerName="mariadb-database-create" Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.981917 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4048ba-7b5a-48ab-b609-21cc5598d56c" containerName="mariadb-database-create" Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.982162 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc4048ba-7b5a-48ab-b609-21cc5598d56c" containerName="mariadb-database-create" Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.982193 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c13d8a-a25a-419e-9267-6894a86897cc" containerName="mariadb-account-create-update" Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.983533 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.989913 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tz9rv"] Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.082180 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7jv7\" (UniqueName: \"kubernetes.io/projected/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-kube-api-access-p7jv7\") pod \"glance-db-create-tz9rv\" (UID: \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\") " pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.082533 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-operator-scripts\") pod \"glance-db-create-tz9rv\" (UID: \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\") " pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.091008 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a970-account-create-update-pkxzm"] Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.092454 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.100574 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a970-account-create-update-pkxzm"] Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.110386 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.187030 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7jv7\" (UniqueName: \"kubernetes.io/projected/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-kube-api-access-p7jv7\") pod \"glance-db-create-tz9rv\" (UID: \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\") " pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.187135 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaf98b6-b097-4cbe-9815-835cd72b2616-operator-scripts\") pod \"glance-a970-account-create-update-pkxzm\" (UID: \"9eaf98b6-b097-4cbe-9815-835cd72b2616\") " pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.187184 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-operator-scripts\") pod \"glance-db-create-tz9rv\" (UID: \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\") " pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.187302 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-469r2\" (UniqueName: \"kubernetes.io/projected/9eaf98b6-b097-4cbe-9815-835cd72b2616-kube-api-access-469r2\") pod \"glance-a970-account-create-update-pkxzm\" (UID: 
\"9eaf98b6-b097-4cbe-9815-835cd72b2616\") " pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.188024 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-operator-scripts\") pod \"glance-db-create-tz9rv\" (UID: \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\") " pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.205544 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7jv7\" (UniqueName: \"kubernetes.io/projected/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-kube-api-access-p7jv7\") pod \"glance-db-create-tz9rv\" (UID: \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\") " pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.288939 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaf98b6-b097-4cbe-9815-835cd72b2616-operator-scripts\") pod \"glance-a970-account-create-update-pkxzm\" (UID: \"9eaf98b6-b097-4cbe-9815-835cd72b2616\") " pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.291102 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-469r2\" (UniqueName: \"kubernetes.io/projected/9eaf98b6-b097-4cbe-9815-835cd72b2616-kube-api-access-469r2\") pod \"glance-a970-account-create-update-pkxzm\" (UID: \"9eaf98b6-b097-4cbe-9815-835cd72b2616\") " pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.289949 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaf98b6-b097-4cbe-9815-835cd72b2616-operator-scripts\") pod \"glance-a970-account-create-update-pkxzm\" 
(UID: \"9eaf98b6-b097-4cbe-9815-835cd72b2616\") " pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.308624 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-469r2\" (UniqueName: \"kubernetes.io/projected/9eaf98b6-b097-4cbe-9815-835cd72b2616-kube-api-access-469r2\") pod \"glance-a970-account-create-update-pkxzm\" (UID: \"9eaf98b6-b097-4cbe-9815-835cd72b2616\") " pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.311708 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.426001 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.432045 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.439618 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.521859 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkvf4\" (UniqueName: \"kubernetes.io/projected/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-kube-api-access-pkvf4\") pod \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\" (UID: \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\") " Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.521940 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-config\") pod \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.521973 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-operator-scripts\") pod \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\" (UID: \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\") " Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.522025 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2dlj\" (UniqueName: \"kubernetes.io/projected/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-kube-api-access-c2dlj\") pod \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.522143 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-dns-svc\") pod \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.523587 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d9b1a5f6-106b-4c9e-a847-133b75cfaa94" (UID: "d9b1a5f6-106b-4c9e-a847-133b75cfaa94"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.529024 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-kube-api-access-c2dlj" (OuterVolumeSpecName: "kube-api-access-c2dlj") pod "c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" (UID: "c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f"). InnerVolumeSpecName "kube-api-access-c2dlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.530933 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-kube-api-access-pkvf4" (OuterVolumeSpecName: "kube-api-access-pkvf4") pod "d9b1a5f6-106b-4c9e-a847-133b75cfaa94" (UID: "d9b1a5f6-106b-4c9e-a847-133b75cfaa94"). InnerVolumeSpecName "kube-api-access-pkvf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.605332 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-config" (OuterVolumeSpecName: "config") pod "c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" (UID: "c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.614608 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" (UID: "c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.625776 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2dlj\" (UniqueName: \"kubernetes.io/projected/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-kube-api-access-c2dlj\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.625803 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.625813 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkvf4\" (UniqueName: \"kubernetes.io/projected/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-kube-api-access-pkvf4\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.625821 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.625830 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.271772 4761 generic.go:334] "Generic (PLEG): container finished" podID="34132cc8-6037-4a17-9a58-5736caf6130b" containerID="eb24dde25e9feceb32b1e0885d44501fcd066a8b2d11595c82eb5c68daa220aa" exitCode=0 Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.271929 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jqk77" event={"ID":"34132cc8-6037-4a17-9a58-5736caf6130b","Type":"ContainerDied","Data":"eb24dde25e9feceb32b1e0885d44501fcd066a8b2d11595c82eb5c68daa220aa"} Mar 
07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.276144 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dnbwr" event={"ID":"d9b1a5f6-106b-4c9e-a847-133b75cfaa94","Type":"ContainerDied","Data":"e63c59e066e9d2a49094f56fbfdf8178914c396f3f7b111720ca5f38b6156c6b"} Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.276175 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e63c59e066e9d2a49094f56fbfdf8178914c396f3f7b111720ca5f38b6156c6b" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.276228 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.278582 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f12e8753-c20a-460e-a4a6-a69f604df651","Type":"ContainerStarted","Data":"3f878102434eb65059967079bc79fdaecf49db95336883897346f660adcde28d"} Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.278611 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f12e8753-c20a-460e-a4a6-a69f604df651","Type":"ContainerStarted","Data":"a6c018276622685d65c955890ef1c27d6a7d0b8de10a5829191a12c72ca26416"} Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.278771 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.282560 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5r7cq" event={"ID":"067b5424-8f75-4bb9-ab09-588e4e306a28","Type":"ContainerStarted","Data":"a3c3a2734ca2fdfedd4aa0341d15fa1948d68cb529469c8595db30900e537e2e"} Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.282702 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:50 crc 
kubenswrapper[4761]: I0307 08:10:50.284649 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerStarted","Data":"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee"} Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.286411 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" event={"ID":"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f","Type":"ContainerDied","Data":"b5fbbc13dbf55e476e8f5fa6e8f3f629fc09303ea70bc234446b8082ea16b4f0"} Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.286446 4761 scope.go:117] "RemoveContainer" containerID="c49e7047447bccbddb275f76f211640b1c7d8ba235ea330a2b7265c257f39e83" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.286456 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.304620 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a970-account-create-update-pkxzm"] Mar 07 08:10:50 crc kubenswrapper[4761]: W0307 08:10:50.311776 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eaf98b6_b097_4cbe_9815_835cd72b2616.slice/crio-1c44d24784465efa79095903693091246b0b4a9f57882f3598402b604a9039a4 WatchSource:0}: Error finding container 1c44d24784465efa79095903693091246b0b4a9f57882f3598402b604a9039a4: Status 404 returned error can't find the container with id 1c44d24784465efa79095903693091246b0b4a9f57882f3598402b604a9039a4 Mar 07 08:10:50 crc kubenswrapper[4761]: W0307 08:10:50.315485 4761 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff84c7f3_11ea_4917_ae31_5abc2a9d9f7c.slice/crio-300ea83ed49ea0e24d20bf09818ef709c42e5d9ddfbd1777ad359d505a0bd39a WatchSource:0}: Error finding container 300ea83ed49ea0e24d20bf09818ef709c42e5d9ddfbd1777ad359d505a0bd39a: Status 404 returned error can't find the container with id 300ea83ed49ea0e24d20bf09818ef709c42e5d9ddfbd1777ad359d505a0bd39a Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.332561 4761 scope.go:117] "RemoveContainer" containerID="a269d72aae7f21a36693603be9bf3e2bdc5f0a95b59c92edc9bb043030d3a13b" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.334865 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tz9rv"] Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.345126 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.416680875 podStartE2EDuration="9.345108421s" podCreationTimestamp="2026-03-07 08:10:41 +0000 UTC" firstStartedPulling="2026-03-07 08:10:42.876506775 +0000 UTC m=+1299.785673250" lastFinishedPulling="2026-03-07 08:10:49.804934331 +0000 UTC m=+1306.714100796" observedRunningTime="2026-03-07 08:10:50.317279793 +0000 UTC m=+1307.226446268" watchObservedRunningTime="2026-03-07 08:10:50.345108421 +0000 UTC m=+1307.254274896" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.361368 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lmkd6"] Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.376147 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lmkd6"] Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.383064 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-5r7cq" podStartSLOduration=9.383044695 podStartE2EDuration="9.383044695s" podCreationTimestamp="2026-03-07 08:10:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:50.354274054 +0000 UTC m=+1307.263440529" watchObservedRunningTime="2026-03-07 08:10:50.383044695 +0000 UTC m=+1307.292211170" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.602690 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z"] Mar 07 08:10:50 crc kubenswrapper[4761]: E0307 08:10:50.603046 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" containerName="dnsmasq-dns" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.603058 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" containerName="dnsmasq-dns" Mar 07 08:10:50 crc kubenswrapper[4761]: E0307 08:10:50.603070 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" containerName="init" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.603076 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" containerName="init" Mar 07 08:10:50 crc kubenswrapper[4761]: E0307 08:10:50.603085 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9b1a5f6-106b-4c9e-a847-133b75cfaa94" containerName="mariadb-account-create-update" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.603091 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9b1a5f6-106b-4c9e-a847-133b75cfaa94" containerName="mariadb-account-create-update" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.603282 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" containerName="dnsmasq-dns" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.603302 4761 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d9b1a5f6-106b-4c9e-a847-133b75cfaa94" containerName="mariadb-account-create-update" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.603915 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.633334 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z"] Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.760021 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1946466-f406-4073-96f8-cc6e66148293-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-rxl5z\" (UID: \"c1946466-f406-4073-96f8-cc6e66148293\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.760075 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc4sg\" (UniqueName: \"kubernetes.io/projected/c1946466-f406-4073-96f8-cc6e66148293-kube-api-access-rc4sg\") pod \"mysqld-exporter-openstack-cell1-db-create-rxl5z\" (UID: \"c1946466-f406-4073-96f8-cc6e66148293\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.815708 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-49ec-account-create-update-257w6"] Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.817250 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.819364 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.833130 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-49ec-account-create-update-257w6"] Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.862258 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1946466-f406-4073-96f8-cc6e66148293-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-rxl5z\" (UID: \"c1946466-f406-4073-96f8-cc6e66148293\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.862305 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc4sg\" (UniqueName: \"kubernetes.io/projected/c1946466-f406-4073-96f8-cc6e66148293-kube-api-access-rc4sg\") pod \"mysqld-exporter-openstack-cell1-db-create-rxl5z\" (UID: \"c1946466-f406-4073-96f8-cc6e66148293\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.863815 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1946466-f406-4073-96f8-cc6e66148293-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-rxl5z\" (UID: \"c1946466-f406-4073-96f8-cc6e66148293\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.900362 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc4sg\" (UniqueName: 
\"kubernetes.io/projected/c1946466-f406-4073-96f8-cc6e66148293-kube-api-access-rc4sg\") pod \"mysqld-exporter-openstack-cell1-db-create-rxl5z\" (UID: \"c1946466-f406-4073-96f8-cc6e66148293\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.938275 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.965770 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/042bb2b8-9493-439c-85e3-bb2766db2135-operator-scripts\") pod \"mysqld-exporter-49ec-account-create-update-257w6\" (UID: \"042bb2b8-9493-439c-85e3-bb2766db2135\") " pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.966160 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brbhg\" (UniqueName: \"kubernetes.io/projected/042bb2b8-9493-439c-85e3-bb2766db2135-kube-api-access-brbhg\") pod \"mysqld-exporter-49ec-account-create-update-257w6\" (UID: \"042bb2b8-9493-439c-85e3-bb2766db2135\") " pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.068417 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/042bb2b8-9493-439c-85e3-bb2766db2135-operator-scripts\") pod \"mysqld-exporter-49ec-account-create-update-257w6\" (UID: \"042bb2b8-9493-439c-85e3-bb2766db2135\") " pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.068684 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brbhg\" (UniqueName: 
\"kubernetes.io/projected/042bb2b8-9493-439c-85e3-bb2766db2135-kube-api-access-brbhg\") pod \"mysqld-exporter-49ec-account-create-update-257w6\" (UID: \"042bb2b8-9493-439c-85e3-bb2766db2135\") " pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.069645 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/042bb2b8-9493-439c-85e3-bb2766db2135-operator-scripts\") pod \"mysqld-exporter-49ec-account-create-update-257w6\" (UID: \"042bb2b8-9493-439c-85e3-bb2766db2135\") " pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.093100 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brbhg\" (UniqueName: \"kubernetes.io/projected/042bb2b8-9493-439c-85e3-bb2766db2135-kube-api-access-brbhg\") pod \"mysqld-exporter-49ec-account-create-update-257w6\" (UID: \"042bb2b8-9493-439c-85e3-bb2766db2135\") " pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.105995 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dnbwr"] Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.114208 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-dnbwr"] Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.139673 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.296454 4761 generic.go:334] "Generic (PLEG): container finished" podID="ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c" containerID="d0bdd15b2bc88eba2d15ee11a9ddb62245bc8b40ec0c2d87cdc1ae8f6b410cf0" exitCode=0 Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.296730 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tz9rv" event={"ID":"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c","Type":"ContainerDied","Data":"d0bdd15b2bc88eba2d15ee11a9ddb62245bc8b40ec0c2d87cdc1ae8f6b410cf0"} Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.296755 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tz9rv" event={"ID":"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c","Type":"ContainerStarted","Data":"300ea83ed49ea0e24d20bf09818ef709c42e5d9ddfbd1777ad359d505a0bd39a"} Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.304750 4761 generic.go:334] "Generic (PLEG): container finished" podID="9eaf98b6-b097-4cbe-9815-835cd72b2616" containerID="b97d97e3a4c4f2472de35f79e1c8d14798b00f3965cb5ecc889970e8b120eb9c" exitCode=0 Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.304874 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a970-account-create-update-pkxzm" event={"ID":"9eaf98b6-b097-4cbe-9815-835cd72b2616","Type":"ContainerDied","Data":"b97d97e3a4c4f2472de35f79e1c8d14798b00f3965cb5ecc889970e8b120eb9c"} Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.304919 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a970-account-create-update-pkxzm" event={"ID":"9eaf98b6-b097-4cbe-9815-835cd72b2616","Type":"ContainerStarted","Data":"1c44d24784465efa79095903693091246b0b4a9f57882f3598402b604a9039a4"} Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.547059 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z"] Mar 07 08:10:51 crc kubenswrapper[4761]: W0307 08:10:51.562874 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1946466_f406_4073_96f8_cc6e66148293.slice/crio-6ee4580af6bc65b5bd76d05f3934867ec935139e99a669fe6a2cce9cc8e30ab3 WatchSource:0}: Error finding container 6ee4580af6bc65b5bd76d05f3934867ec935139e99a669fe6a2cce9cc8e30ab3: Status 404 returned error can't find the container with id 6ee4580af6bc65b5bd76d05f3934867ec935139e99a669fe6a2cce9cc8e30ab3 Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.719949 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" path="/var/lib/kubelet/pods/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f/volumes" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.720950 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9b1a5f6-106b-4c9e-a847-133b75cfaa94" path="/var/lib/kubelet/pods/d9b1a5f6-106b-4c9e-a847-133b75cfaa94/volumes" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.764922 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-49ec-account-create-update-257w6"] Mar 07 08:10:51 crc kubenswrapper[4761]: W0307 08:10:51.776973 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod042bb2b8_9493_439c_85e3_bb2766db2135.slice/crio-5b47280cce153576ac42c05c5c61b6d4a708a135bb61a2f13f6a7167deec683e WatchSource:0}: Error finding container 5b47280cce153576ac42c05c5c61b6d4a708a135bb61a2f13f6a7167deec683e: Status 404 returned error can't find the container with id 5b47280cce153576ac42c05c5c61b6d4a708a135bb61a2f13f6a7167deec683e Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.784688 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.888382 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-ring-data-devices\") pod \"34132cc8-6037-4a17-9a58-5736caf6130b\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.888480 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-combined-ca-bundle\") pod \"34132cc8-6037-4a17-9a58-5736caf6130b\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.888518 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-scripts\") pod \"34132cc8-6037-4a17-9a58-5736caf6130b\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.888545 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34132cc8-6037-4a17-9a58-5736caf6130b-etc-swift\") pod \"34132cc8-6037-4a17-9a58-5736caf6130b\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.888664 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kllgt\" (UniqueName: \"kubernetes.io/projected/34132cc8-6037-4a17-9a58-5736caf6130b-kube-api-access-kllgt\") pod \"34132cc8-6037-4a17-9a58-5736caf6130b\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.888699 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-dispersionconf\") pod \"34132cc8-6037-4a17-9a58-5736caf6130b\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.888838 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-swiftconf\") pod \"34132cc8-6037-4a17-9a58-5736caf6130b\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.890600 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "34132cc8-6037-4a17-9a58-5736caf6130b" (UID: "34132cc8-6037-4a17-9a58-5736caf6130b"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.890812 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34132cc8-6037-4a17-9a58-5736caf6130b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "34132cc8-6037-4a17-9a58-5736caf6130b" (UID: "34132cc8-6037-4a17-9a58-5736caf6130b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.895482 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34132cc8-6037-4a17-9a58-5736caf6130b-kube-api-access-kllgt" (OuterVolumeSpecName: "kube-api-access-kllgt") pod "34132cc8-6037-4a17-9a58-5736caf6130b" (UID: "34132cc8-6037-4a17-9a58-5736caf6130b"). InnerVolumeSpecName "kube-api-access-kllgt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.898855 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "34132cc8-6037-4a17-9a58-5736caf6130b" (UID: "34132cc8-6037-4a17-9a58-5736caf6130b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.990833 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kllgt\" (UniqueName: \"kubernetes.io/projected/34132cc8-6037-4a17-9a58-5736caf6130b-kube-api-access-kllgt\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.990867 4761 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.990877 4761 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.990885 4761 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34132cc8-6037-4a17-9a58-5736caf6130b-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.023703 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-scripts" (OuterVolumeSpecName: "scripts") pod "34132cc8-6037-4a17-9a58-5736caf6130b" (UID: "34132cc8-6037-4a17-9a58-5736caf6130b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.093485 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.125560 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34132cc8-6037-4a17-9a58-5736caf6130b" (UID: "34132cc8-6037-4a17-9a58-5736caf6130b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.133101 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "34132cc8-6037-4a17-9a58-5736caf6130b" (UID: "34132cc8-6037-4a17-9a58-5736caf6130b"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.195116 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.195144 4761 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.322123 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.322287 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jqk77" event={"ID":"34132cc8-6037-4a17-9a58-5736caf6130b","Type":"ContainerDied","Data":"279ce1b3aa96e47b801aba7e6ddb05970bd3015519986cd5516b58ff04cf8381"} Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.322565 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="279ce1b3aa96e47b801aba7e6ddb05970bd3015519986cd5516b58ff04cf8381" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.345204 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" event={"ID":"042bb2b8-9493-439c-85e3-bb2766db2135","Type":"ContainerStarted","Data":"10f5faab65c65733fae3ac0c1b8b365a3b145620a08013168d0ccf82c6a4bb89"} Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.345287 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" event={"ID":"042bb2b8-9493-439c-85e3-bb2766db2135","Type":"ContainerStarted","Data":"5b47280cce153576ac42c05c5c61b6d4a708a135bb61a2f13f6a7167deec683e"} Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.348999 4761 generic.go:334] "Generic (PLEG): container finished" podID="c1946466-f406-4073-96f8-cc6e66148293" containerID="d38fa90028b86d72ed68d38df6e216cf503a5d579c9cddea71be1aba3c5e65a2" exitCode=0 Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.349124 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" event={"ID":"c1946466-f406-4073-96f8-cc6e66148293","Type":"ContainerDied","Data":"d38fa90028b86d72ed68d38df6e216cf503a5d579c9cddea71be1aba3c5e65a2"} Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.349170 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" event={"ID":"c1946466-f406-4073-96f8-cc6e66148293","Type":"ContainerStarted","Data":"6ee4580af6bc65b5bd76d05f3934867ec935139e99a669fe6a2cce9cc8e30ab3"} Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.382241 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" podStartSLOduration=2.382218176 podStartE2EDuration="2.382218176s" podCreationTimestamp="2026-03-07 08:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:52.366458606 +0000 UTC m=+1309.275625091" watchObservedRunningTime="2026-03-07 08:10:52.382218176 +0000 UTC m=+1309.291384651" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.950257 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.955453 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.118925 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-operator-scripts\") pod \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\" (UID: \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\") " Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.119031 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-469r2\" (UniqueName: \"kubernetes.io/projected/9eaf98b6-b097-4cbe-9815-835cd72b2616-kube-api-access-469r2\") pod \"9eaf98b6-b097-4cbe-9815-835cd72b2616\" (UID: \"9eaf98b6-b097-4cbe-9815-835cd72b2616\") " Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.119195 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7jv7\" (UniqueName: \"kubernetes.io/projected/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-kube-api-access-p7jv7\") pod \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\" (UID: \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\") " Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.119291 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaf98b6-b097-4cbe-9815-835cd72b2616-operator-scripts\") pod \"9eaf98b6-b097-4cbe-9815-835cd72b2616\" (UID: \"9eaf98b6-b097-4cbe-9815-835cd72b2616\") " Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.119689 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c" (UID: "ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.120178 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eaf98b6-b097-4cbe-9815-835cd72b2616-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9eaf98b6-b097-4cbe-9815-835cd72b2616" (UID: "9eaf98b6-b097-4cbe-9815-835cd72b2616"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.124207 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-kube-api-access-p7jv7" (OuterVolumeSpecName: "kube-api-access-p7jv7") pod "ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c" (UID: "ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c"). InnerVolumeSpecName "kube-api-access-p7jv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.124314 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eaf98b6-b097-4cbe-9815-835cd72b2616-kube-api-access-469r2" (OuterVolumeSpecName: "kube-api-access-469r2") pod "9eaf98b6-b097-4cbe-9815-835cd72b2616" (UID: "9eaf98b6-b097-4cbe-9815-835cd72b2616"). InnerVolumeSpecName "kube-api-access-469r2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.222357 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaf98b6-b097-4cbe-9815-835cd72b2616-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.222436 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.222456 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-469r2\" (UniqueName: \"kubernetes.io/projected/9eaf98b6-b097-4cbe-9815-835cd72b2616-kube-api-access-469r2\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.222478 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7jv7\" (UniqueName: \"kubernetes.io/projected/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-kube-api-access-p7jv7\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.361487 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tz9rv" event={"ID":"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c","Type":"ContainerDied","Data":"300ea83ed49ea0e24d20bf09818ef709c42e5d9ddfbd1777ad359d505a0bd39a"} Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.361557 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="300ea83ed49ea0e24d20bf09818ef709c42e5d9ddfbd1777ad359d505a0bd39a" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.361655 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.367054 4761 generic.go:334] "Generic (PLEG): container finished" podID="042bb2b8-9493-439c-85e3-bb2766db2135" containerID="10f5faab65c65733fae3ac0c1b8b365a3b145620a08013168d0ccf82c6a4bb89" exitCode=0 Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.367117 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" event={"ID":"042bb2b8-9493-439c-85e3-bb2766db2135","Type":"ContainerDied","Data":"10f5faab65c65733fae3ac0c1b8b365a3b145620a08013168d0ccf82c6a4bb89"} Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.370299 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.370923 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a970-account-create-update-pkxzm" event={"ID":"9eaf98b6-b097-4cbe-9815-835cd72b2616","Type":"ContainerDied","Data":"1c44d24784465efa79095903693091246b0b4a9f57882f3598402b604a9039a4"} Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.370982 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c44d24784465efa79095903693091246b0b4a9f57882f3598402b604a9039a4" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.641349 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.821535 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.941449 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1946466-f406-4073-96f8-cc6e66148293-operator-scripts\") pod \"c1946466-f406-4073-96f8-cc6e66148293\" (UID: \"c1946466-f406-4073-96f8-cc6e66148293\") " Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.941799 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc4sg\" (UniqueName: \"kubernetes.io/projected/c1946466-f406-4073-96f8-cc6e66148293-kube-api-access-rc4sg\") pod \"c1946466-f406-4073-96f8-cc6e66148293\" (UID: \"c1946466-f406-4073-96f8-cc6e66148293\") " Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.941894 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1946466-f406-4073-96f8-cc6e66148293-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1946466-f406-4073-96f8-cc6e66148293" (UID: "c1946466-f406-4073-96f8-cc6e66148293"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.942450 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1946466-f406-4073-96f8-cc6e66148293-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.950623 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1946466-f406-4073-96f8-cc6e66148293-kube-api-access-rc4sg" (OuterVolumeSpecName: "kube-api-access-rc4sg") pod "c1946466-f406-4073-96f8-cc6e66148293" (UID: "c1946466-f406-4073-96f8-cc6e66148293"). InnerVolumeSpecName "kube-api-access-rc4sg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.044652 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc4sg\" (UniqueName: \"kubernetes.io/projected/c1946466-f406-4073-96f8-cc6e66148293-kube-api-access-rc4sg\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.384499 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.384517 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" event={"ID":"c1946466-f406-4073-96f8-cc6e66148293","Type":"ContainerDied","Data":"6ee4580af6bc65b5bd76d05f3934867ec935139e99a669fe6a2cce9cc8e30ab3"} Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.384550 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ee4580af6bc65b5bd76d05f3934867ec935139e99a669fe6a2cce9cc8e30ab3" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.389629 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerStarted","Data":"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020"} Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.740876 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-nm5hz"] Mar 07 08:10:54 crc kubenswrapper[4761]: E0307 08:10:54.741446 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1946466-f406-4073-96f8-cc6e66148293" containerName="mariadb-database-create" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.741469 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1946466-f406-4073-96f8-cc6e66148293" containerName="mariadb-database-create" Mar 07 
08:10:54 crc kubenswrapper[4761]: E0307 08:10:54.741492 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eaf98b6-b097-4cbe-9815-835cd72b2616" containerName="mariadb-account-create-update" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.741502 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eaf98b6-b097-4cbe-9815-835cd72b2616" containerName="mariadb-account-create-update" Mar 07 08:10:54 crc kubenswrapper[4761]: E0307 08:10:54.741517 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c" containerName="mariadb-database-create" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.741527 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c" containerName="mariadb-database-create" Mar 07 08:10:54 crc kubenswrapper[4761]: E0307 08:10:54.741559 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34132cc8-6037-4a17-9a58-5736caf6130b" containerName="swift-ring-rebalance" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.741573 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="34132cc8-6037-4a17-9a58-5736caf6130b" containerName="swift-ring-rebalance" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.741843 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eaf98b6-b097-4cbe-9815-835cd72b2616" containerName="mariadb-account-create-update" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.741861 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1946466-f406-4073-96f8-cc6e66148293" containerName="mariadb-database-create" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.741885 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c" containerName="mariadb-database-create" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.741902 4761 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="34132cc8-6037-4a17-9a58-5736caf6130b" containerName="swift-ring-rebalance" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.743067 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.751520 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.768004 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nm5hz"] Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.864353 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwmfv\" (UniqueName: \"kubernetes.io/projected/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-kube-api-access-kwmfv\") pod \"root-account-create-update-nm5hz\" (UID: \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\") " pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.864760 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-operator-scripts\") pod \"root-account-create-update-nm5hz\" (UID: \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\") " pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.923368 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.966913 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-operator-scripts\") pod \"root-account-create-update-nm5hz\" (UID: \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\") " pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.967041 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwmfv\" (UniqueName: \"kubernetes.io/projected/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-kube-api-access-kwmfv\") pod \"root-account-create-update-nm5hz\" (UID: \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\") " pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.967987 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-operator-scripts\") pod \"root-account-create-update-nm5hz\" (UID: \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\") " pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.003407 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwmfv\" (UniqueName: \"kubernetes.io/projected/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-kube-api-access-kwmfv\") pod \"root-account-create-update-nm5hz\" (UID: \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\") " pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.068680 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brbhg\" (UniqueName: \"kubernetes.io/projected/042bb2b8-9493-439c-85e3-bb2766db2135-kube-api-access-brbhg\") pod 
\"042bb2b8-9493-439c-85e3-bb2766db2135\" (UID: \"042bb2b8-9493-439c-85e3-bb2766db2135\") " Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.068818 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/042bb2b8-9493-439c-85e3-bb2766db2135-operator-scripts\") pod \"042bb2b8-9493-439c-85e3-bb2766db2135\" (UID: \"042bb2b8-9493-439c-85e3-bb2766db2135\") " Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.069701 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/042bb2b8-9493-439c-85e3-bb2766db2135-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "042bb2b8-9493-439c-85e3-bb2766db2135" (UID: "042bb2b8-9493-439c-85e3-bb2766db2135"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.071740 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/042bb2b8-9493-439c-85e3-bb2766db2135-kube-api-access-brbhg" (OuterVolumeSpecName: "kube-api-access-brbhg") pod "042bb2b8-9493-439c-85e3-bb2766db2135" (UID: "042bb2b8-9493-439c-85e3-bb2766db2135"). InnerVolumeSpecName "kube-api-access-brbhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.080393 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.171208 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brbhg\" (UniqueName: \"kubernetes.io/projected/042bb2b8-9493-439c-85e3-bb2766db2135-kube-api-access-brbhg\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.171248 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/042bb2b8-9493-439c-85e3-bb2766db2135-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.403797 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" event={"ID":"042bb2b8-9493-439c-85e3-bb2766db2135","Type":"ContainerDied","Data":"5b47280cce153576ac42c05c5c61b6d4a708a135bb61a2f13f6a7167deec683e"} Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.403838 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b47280cce153576ac42c05c5c61b6d4a708a135bb61a2f13f6a7167deec683e" Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.403888 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.552356 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nm5hz"] Mar 07 08:10:55 crc kubenswrapper[4761]: W0307 08:10:55.554460 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e8890dc_2bb1_4dd4_a12a_b550d87e9e1a.slice/crio-f767270a9a2a93e1501931c0a5313900d4589f80262314f02ca4c2e9170296b5 WatchSource:0}: Error finding container f767270a9a2a93e1501931c0a5313900d4589f80262314f02ca4c2e9170296b5: Status 404 returned error can't find the container with id f767270a9a2a93e1501931c0a5313900d4589f80262314f02ca4c2e9170296b5 Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.049787 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Mar 07 08:10:56 crc kubenswrapper[4761]: E0307 08:10:56.053519 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042bb2b8-9493-439c-85e3-bb2766db2135" containerName="mariadb-account-create-update" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.053560 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="042bb2b8-9493-439c-85e3-bb2766db2135" containerName="mariadb-account-create-update" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.053917 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="042bb2b8-9493-439c-85e3-bb2766db2135" containerName="mariadb-account-create-update" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.055584 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.058636 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.068306 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.195928 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-config-data\") pod \"mysqld-exporter-0\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.196086 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.196190 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzvdk\" (UniqueName: \"kubernetes.io/projected/43e38c78-3b46-4182-bae7-aa8c4d9b909b-kube-api-access-zzvdk\") pod \"mysqld-exporter-0\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.297705 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.297845 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzvdk\" (UniqueName: \"kubernetes.io/projected/43e38c78-3b46-4182-bae7-aa8c4d9b909b-kube-api-access-zzvdk\") pod \"mysqld-exporter-0\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.297975 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-config-data\") pod \"mysqld-exporter-0\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.317682 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-config-data\") pod \"mysqld-exporter-0\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.317775 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.317840 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzvdk\" (UniqueName: \"kubernetes.io/projected/43e38c78-3b46-4182-bae7-aa8c4d9b909b-kube-api-access-zzvdk\") pod \"mysqld-exporter-0\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.375307 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.422579 4761 generic.go:334] "Generic (PLEG): container finished" podID="0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a" containerID="411930607eac514bd071597b40dd8906cabf45add842a076667a07a5d0a6cff5" exitCode=0 Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.422684 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nm5hz" event={"ID":"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a","Type":"ContainerDied","Data":"411930607eac514bd071597b40dd8906cabf45add842a076667a07a5d0a6cff5"} Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.422761 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nm5hz" event={"ID":"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a","Type":"ContainerStarted","Data":"f767270a9a2a93e1501931c0a5313900d4589f80262314f02ca4c2e9170296b5"} Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.425011 4761 generic.go:334] "Generic (PLEG): container finished" podID="49dec540-e872-432f-bffe-1b0380ac0082" containerID="9ee7ce9221a6be795722d6e5f52ae5f0c03c8d8b610024b67cfd95e5744149c2" exitCode=0 Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.425132 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49dec540-e872-432f-bffe-1b0380ac0082","Type":"ContainerDied","Data":"9ee7ce9221a6be795722d6e5f52ae5f0c03c8d8b610024b67cfd95e5744149c2"} Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.426899 4761 generic.go:334] "Generic (PLEG): container finished" podID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" containerID="1e506ba29675507705351ff4dddbabf2575095cb15dab3309deefdd45c364615" exitCode=0 Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.426960 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" 
event={"ID":"7201e0b2-1f44-45f0-b746-b98f8cb01f8f","Type":"ContainerDied","Data":"1e506ba29675507705351ff4dddbabf2575095cb15dab3309deefdd45c364615"} Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.430421 4761 generic.go:334] "Generic (PLEG): container finished" podID="663244dc-847b-4dda-9c2c-4cae23e48e64" containerID="cac1f058abec00ed564c939ed9e3b5f26abb1b9f3f9688745486b048618d23c8" exitCode=0 Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.430468 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"663244dc-847b-4dda-9c2c-4cae23e48e64","Type":"ContainerDied","Data":"cac1f058abec00ed564c939ed9e3b5f26abb1b9f3f9688745486b048618d23c8"} Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.946287 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.154976 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.227355 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5qlzh"] Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.227579 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" podUID="6e8f6876-f4f5-429e-9908-9b890bd215f7" containerName="dnsmasq-dns" containerID="cri-o://18bd356b27523c6307038934611f40d3e730ca8eb63d6853e0975378361f0131" gracePeriod=10 Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.448374 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49dec540-e872-432f-bffe-1b0380ac0082","Type":"ContainerStarted","Data":"5888ac18ecadfb4983a3dc774d889f0a46c93806c8b965f02ef1b4898fdb22d2"} Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.448594 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/rabbitmq-server-0" Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.452500 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7201e0b2-1f44-45f0-b746-b98f8cb01f8f","Type":"ContainerStarted","Data":"29cb38754c06ba4cf8ad902c0d21b151c7ca626800f06ecaaa2ef264e60c35b1"} Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.455548 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.458265 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"663244dc-847b-4dda-9c2c-4cae23e48e64","Type":"ContainerStarted","Data":"1d26b75a698ad04d687e4077e53dc96a9d1ef67c0216076f5debf22ce97e1f0d"} Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.458868 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.464286 4761 generic.go:334] "Generic (PLEG): container finished" podID="6e8f6876-f4f5-429e-9908-9b890bd215f7" containerID="18bd356b27523c6307038934611f40d3e730ca8eb63d6853e0975378361f0131" exitCode=0 Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.464444 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" event={"ID":"6e8f6876-f4f5-429e-9908-9b890bd215f7","Type":"ContainerDied","Data":"18bd356b27523c6307038934611f40d3e730ca8eb63d6853e0975378361f0131"} Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.486530 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.520009382 podStartE2EDuration="1m4.48650745s" podCreationTimestamp="2026-03-07 08:09:53 +0000 UTC" firstStartedPulling="2026-03-07 08:09:56.426864417 +0000 UTC m=+1253.336030892" lastFinishedPulling="2026-03-07 08:10:22.393362485 +0000 UTC m=+1279.302528960" 
observedRunningTime="2026-03-07 08:10:57.481112523 +0000 UTC m=+1314.390278998" watchObservedRunningTime="2026-03-07 08:10:57.48650745 +0000 UTC m=+1314.395673925" Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.518504 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=-9223371972.33629 podStartE2EDuration="1m4.518485793s" podCreationTimestamp="2026-03-07 08:09:53 +0000 UTC" firstStartedPulling="2026-03-07 08:09:56.383122855 +0000 UTC m=+1253.292289330" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:57.513050785 +0000 UTC m=+1314.422217260" watchObservedRunningTime="2026-03-07 08:10:57.518485793 +0000 UTC m=+1314.427652268" Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.550384 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=40.10986318 podStartE2EDuration="1m4.550366763s" podCreationTimestamp="2026-03-07 08:09:53 +0000 UTC" firstStartedPulling="2026-03-07 08:09:56.34753344 +0000 UTC m=+1253.256699915" lastFinishedPulling="2026-03-07 08:10:20.788037003 +0000 UTC m=+1277.697203498" observedRunningTime="2026-03-07 08:10:57.547243714 +0000 UTC m=+1314.456410189" watchObservedRunningTime="2026-03-07 08:10:57.550366763 +0000 UTC m=+1314.459533238" Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.726996 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5dd9c59c48-q98tn" podUID="f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" containerName="console" containerID="cri-o://771fcc82e9e174ecccd0b64d2b97c51eaa31b3eb4f5e46854d449a2314c1bfae" gracePeriod=15 Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.435044 4761 patch_prober.go:28] interesting pod/console-5dd9c59c48-q98tn container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.93:8443/health\": dial tcp 
10.217.0.93:8443: connect: connection refused" start-of-body= Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.435102 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5dd9c59c48-q98tn" podUID="f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.93:8443/health\": dial tcp 10.217.0.93:8443: connect: connection refused" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.479238 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5dd9c59c48-q98tn_f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc/console/0.log" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.479532 4761 generic.go:334] "Generic (PLEG): container finished" podID="f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" containerID="771fcc82e9e174ecccd0b64d2b97c51eaa31b3eb4f5e46854d449a2314c1bfae" exitCode=2 Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.479633 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dd9c59c48-q98tn" event={"ID":"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc","Type":"ContainerDied","Data":"771fcc82e9e174ecccd0b64d2b97c51eaa31b3eb4f5e46854d449a2314c1bfae"} Mar 07 08:10:58 crc kubenswrapper[4761]: W0307 08:10:58.554094 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43e38c78_3b46_4182_bae7_aa8c4d9b909b.slice/crio-e7e2fc64d0a795eee67dac1c574da8a7568d40cfe8d5bd6830d080270a74b5b0 WatchSource:0}: Error finding container e7e2fc64d0a795eee67dac1c574da8a7568d40cfe8d5bd6830d080270a74b5b0: Status 404 returned error can't find the container with id e7e2fc64d0a795eee67dac1c574da8a7568d40cfe8d5bd6830d080270a74b5b0 Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.627804 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-g9w2m"] Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.629475 4761 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.636318 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.636421 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zmqzm" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.649275 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-g9w2m"] Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.666274 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-db-sync-config-data\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.666338 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x75n7\" (UniqueName: \"kubernetes.io/projected/a990e713-634f-47c4-acbe-980ed66d30fe-kube-api-access-x75n7\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.666465 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-combined-ca-bundle\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.666526 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-config-data\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.699504 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.767619 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-operator-scripts\") pod \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\" (UID: \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\") " Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.767848 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwmfv\" (UniqueName: \"kubernetes.io/projected/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-kube-api-access-kwmfv\") pod \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\" (UID: \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\") " Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.768277 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-db-sync-config-data\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.768320 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x75n7\" (UniqueName: \"kubernetes.io/projected/a990e713-634f-47c4-acbe-980ed66d30fe-kube-api-access-x75n7\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.768448 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-combined-ca-bundle\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.768515 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-config-data\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.768513 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a" (UID: "0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.775902 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-kube-api-access-kwmfv" (OuterVolumeSpecName: "kube-api-access-kwmfv") pod "0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a" (UID: "0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a"). InnerVolumeSpecName "kube-api-access-kwmfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.780959 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-config-data\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.780979 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-combined-ca-bundle\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.797169 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x75n7\" (UniqueName: \"kubernetes.io/projected/a990e713-634f-47c4-acbe-980ed66d30fe-kube-api-access-x75n7\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.805349 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-db-sync-config-data\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.869982 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.870084 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwmfv\" (UniqueName: 
\"kubernetes.io/projected/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-kube-api-access-kwmfv\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.010286 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.135610 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5dd9c59c48-q98tn_f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc/console/0.log" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.135687 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.278690 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-oauth-config\") pod \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.279117 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-service-ca\") pod \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.279839 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-service-ca" (OuterVolumeSpecName: "service-ca") pod "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" (UID: "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.279914 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-serving-cert\") pod \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.280392 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-oauth-serving-cert\") pod \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.280459 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-config\") pod \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.280604 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-trusted-ca-bundle\") pod \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.280672 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzrm2\" (UniqueName: \"kubernetes.io/projected/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-kube-api-access-gzrm2\") pod \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.281407 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-config" (OuterVolumeSpecName: "console-config") pod "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" (UID: "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.281624 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" (UID: "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.281962 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" (UID: "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.282412 4761 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-service-ca\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.282546 4761 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.282568 4761 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.282580 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.284892 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" (UID: "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.288989 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" (UID: "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.289555 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-kube-api-access-gzrm2" (OuterVolumeSpecName: "kube-api-access-gzrm2") pod "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" (UID: "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc"). InnerVolumeSpecName "kube-api-access-gzrm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.317484 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.383814 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-dns-svc\") pod \"6e8f6876-f4f5-429e-9908-9b890bd215f7\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.384015 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjqrt\" (UniqueName: \"kubernetes.io/projected/6e8f6876-f4f5-429e-9908-9b890bd215f7-kube-api-access-mjqrt\") pod \"6e8f6876-f4f5-429e-9908-9b890bd215f7\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.384100 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-config\") pod \"6e8f6876-f4f5-429e-9908-9b890bd215f7\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.384578 4761 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-serving-cert\") on 
node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.384592 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzrm2\" (UniqueName: \"kubernetes.io/projected/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-kube-api-access-gzrm2\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.384602 4761 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.397141 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e8f6876-f4f5-429e-9908-9b890bd215f7-kube-api-access-mjqrt" (OuterVolumeSpecName: "kube-api-access-mjqrt") pod "6e8f6876-f4f5-429e-9908-9b890bd215f7" (UID: "6e8f6876-f4f5-429e-9908-9b890bd215f7"). InnerVolumeSpecName "kube-api-access-mjqrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.440667 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6e8f6876-f4f5-429e-9908-9b890bd215f7" (UID: "6e8f6876-f4f5-429e-9908-9b890bd215f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.447029 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-config" (OuterVolumeSpecName: "config") pod "6e8f6876-f4f5-429e-9908-9b890bd215f7" (UID: "6e8f6876-f4f5-429e-9908-9b890bd215f7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.486641 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.486675 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjqrt\" (UniqueName: \"kubernetes.io/projected/6e8f6876-f4f5-429e-9908-9b890bd215f7-kube-api-access-mjqrt\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.486687 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.494671 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"43e38c78-3b46-4182-bae7-aa8c4d9b909b","Type":"ContainerStarted","Data":"e7e2fc64d0a795eee67dac1c574da8a7568d40cfe8d5bd6830d080270a74b5b0"} Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.498893 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerStarted","Data":"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed"} Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.501533 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nm5hz" event={"ID":"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a","Type":"ContainerDied","Data":"f767270a9a2a93e1501931c0a5313900d4589f80262314f02ca4c2e9170296b5"} Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.501587 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f767270a9a2a93e1501931c0a5313900d4589f80262314f02ca4c2e9170296b5" Mar 
07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.501672 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.506710 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" event={"ID":"6e8f6876-f4f5-429e-9908-9b890bd215f7","Type":"ContainerDied","Data":"1580f62ed4b835efa056d38801500a326b1b466902057979960f0cd6384ef03c"} Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.506773 4761 scope.go:117] "RemoveContainer" containerID="18bd356b27523c6307038934611f40d3e730ca8eb63d6853e0975378361f0131" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.506882 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.513010 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5dd9c59c48-q98tn_f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc/console/0.log" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.513070 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dd9c59c48-q98tn" event={"ID":"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc","Type":"ContainerDied","Data":"f3ceda7127d4a5ed6071b386f7c8619bc08af08837dc47cb8e39f89c79cb88f3"} Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.513149 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.566638 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=24.157618036 podStartE2EDuration="59.566621769s" podCreationTimestamp="2026-03-07 08:10:00 +0000 UTC" firstStartedPulling="2026-03-07 08:10:23.311074509 +0000 UTC m=+1280.220240984" lastFinishedPulling="2026-03-07 08:10:58.720078242 +0000 UTC m=+1315.629244717" observedRunningTime="2026-03-07 08:10:59.56586749 +0000 UTC m=+1316.475033965" watchObservedRunningTime="2026-03-07 08:10:59.566621769 +0000 UTC m=+1316.475788244" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.605434 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5qlzh"] Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.621811 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5qlzh"] Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.641064 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5dd9c59c48-q98tn"] Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.652431 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5dd9c59c48-q98tn"] Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.751243 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e8f6876-f4f5-429e-9908-9b890bd215f7" path="/var/lib/kubelet/pods/6e8f6876-f4f5-429e-9908-9b890bd215f7/volumes" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.752165 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" path="/var/lib/kubelet/pods/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc/volumes" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.788856 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-db-sync-g9w2m"] Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.953632 4761 scope.go:117] "RemoveContainer" containerID="a71c7c3a354307f54d5910f5284820373d2ba892b20b40983d41a6a146a44c75" Mar 07 08:11:00 crc kubenswrapper[4761]: I0307 08:11:00.013306 4761 scope.go:117] "RemoveContainer" containerID="771fcc82e9e174ecccd0b64d2b97c51eaa31b3eb4f5e46854d449a2314c1bfae" Mar 07 08:11:00 crc kubenswrapper[4761]: I0307 08:11:00.521765 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-g9w2m" event={"ID":"a990e713-634f-47c4-acbe-980ed66d30fe","Type":"ContainerStarted","Data":"cee1ba9976056eb67ee81744b62970e36b05720039ac0e14a5708002d899744d"} Mar 07 08:11:01 crc kubenswrapper[4761]: I0307 08:11:01.143098 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-nm5hz"] Mar 07 08:11:01 crc kubenswrapper[4761]: I0307 08:11:01.154636 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-nm5hz"] Mar 07 08:11:01 crc kubenswrapper[4761]: I0307 08:11:01.534285 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"43e38c78-3b46-4182-bae7-aa8c4d9b909b","Type":"ContainerStarted","Data":"790c4ccb2b2bb73e6a2faf2a7ff889dee3ae87ca4c2382aa000143aa0c34cafb"} Mar 07 08:11:01 crc kubenswrapper[4761]: I0307 08:11:01.557146 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.035258443 podStartE2EDuration="5.557127721s" podCreationTimestamp="2026-03-07 08:10:56 +0000 UTC" firstStartedPulling="2026-03-07 08:10:58.557035198 +0000 UTC m=+1315.466201683" lastFinishedPulling="2026-03-07 08:11:01.078904446 +0000 UTC m=+1317.988070961" observedRunningTime="2026-03-07 08:11:01.546545492 +0000 UTC m=+1318.455711967" watchObservedRunningTime="2026-03-07 08:11:01.557127721 +0000 UTC m=+1318.466294196" Mar 07 08:11:01 crc 
kubenswrapper[4761]: I0307 08:11:01.660452 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:01 crc kubenswrapper[4761]: I0307 08:11:01.661814 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:01 crc kubenswrapper[4761]: I0307 08:11:01.668406 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:01 crc kubenswrapper[4761]: I0307 08:11:01.715899 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a" path="/var/lib/kubelet/pods/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a/volumes" Mar 07 08:11:02 crc kubenswrapper[4761]: I0307 08:11:02.180576 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 07 08:11:02 crc kubenswrapper[4761]: I0307 08:11:02.554666 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:02 crc kubenswrapper[4761]: I0307 08:11:02.673572 4761 scope.go:117] "RemoveContainer" containerID="25b083a88820e0eed141fdd41201c4a079e70c052e7ce84fcef6154729306ab1" Mar 07 08:11:03 crc kubenswrapper[4761]: I0307 08:11:03.088390 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-wq5n6" podUID="9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d" containerName="ovn-controller" probeResult="failure" output=< Mar 07 08:11:03 crc kubenswrapper[4761]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 07 08:11:03 crc kubenswrapper[4761]: > Mar 07 08:11:03 crc kubenswrapper[4761]: I0307 08:11:03.111996 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:11:03 crc kubenswrapper[4761]: I0307 08:11:03.402661 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:11:03 crc kubenswrapper[4761]: I0307 08:11:03.419731 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:11:03 crc kubenswrapper[4761]: I0307 08:11:03.570266 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.313533 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.569771 4761 generic.go:334] "Generic (PLEG): container finished" podID="bc2f3dec-2838-4d30-93c2-631da252cdb7" containerID="89a6b5588731808b0bfe82c5f4e9ce1720f8b54e7fe66d37411578cd9536d97b" exitCode=0 Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.569811 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bc2f3dec-2838-4d30-93c2-631da252cdb7","Type":"ContainerDied","Data":"89a6b5588731808b0bfe82c5f4e9ce1720f8b54e7fe66d37411578cd9536d97b"} Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.571530 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"263558fd897cd107485ab30aa43570eb5c0d86efe6ef28d9f25e532b914f8a16"} Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.583283 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 07 08:11:04 crc 
kubenswrapper[4761]: I0307 08:11:04.796540 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hgl7h"] Mar 07 08:11:04 crc kubenswrapper[4761]: E0307 08:11:04.796933 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" containerName="console" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.796949 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" containerName="console" Mar 07 08:11:04 crc kubenswrapper[4761]: E0307 08:11:04.796966 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e8f6876-f4f5-429e-9908-9b890bd215f7" containerName="dnsmasq-dns" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.796973 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e8f6876-f4f5-429e-9908-9b890bd215f7" containerName="dnsmasq-dns" Mar 07 08:11:04 crc kubenswrapper[4761]: E0307 08:11:04.797010 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a" containerName="mariadb-account-create-update" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.797016 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a" containerName="mariadb-account-create-update" Mar 07 08:11:04 crc kubenswrapper[4761]: E0307 08:11:04.797025 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e8f6876-f4f5-429e-9908-9b890bd215f7" containerName="init" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.797030 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e8f6876-f4f5-429e-9908-9b890bd215f7" containerName="init" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.797211 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" containerName="console" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.797231 4761 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a" containerName="mariadb-account-create-update" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.797243 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e8f6876-f4f5-429e-9908-9b890bd215f7" containerName="dnsmasq-dns" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.798115 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.801043 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.826302 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hgl7h"] Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.840753 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-operator-scripts\") pod \"root-account-create-update-hgl7h\" (UID: \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\") " pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.840866 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2lwk\" (UniqueName: \"kubernetes.io/projected/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-kube-api-access-k2lwk\") pod \"root-account-create-update-hgl7h\" (UID: \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\") " pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.942032 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2lwk\" (UniqueName: 
\"kubernetes.io/projected/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-kube-api-access-k2lwk\") pod \"root-account-create-update-hgl7h\" (UID: \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\") " pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.942167 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-operator-scripts\") pod \"root-account-create-update-hgl7h\" (UID: \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\") " pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.943346 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-operator-scripts\") pod \"root-account-create-update-hgl7h\" (UID: \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\") " pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.974787 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2lwk\" (UniqueName: \"kubernetes.io/projected/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-kube-api-access-k2lwk\") pod \"root-account-create-update-hgl7h\" (UID: \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\") " pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:05 crc kubenswrapper[4761]: I0307 08:11:05.117551 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:05 crc kubenswrapper[4761]: I0307 08:11:05.582319 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="prometheus" containerID="cri-o://f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee" gracePeriod=600 Mar 07 08:11:05 crc kubenswrapper[4761]: I0307 08:11:05.582700 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bc2f3dec-2838-4d30-93c2-631da252cdb7","Type":"ContainerStarted","Data":"818287b0f8f3f1d44f2a907bb97c9168062fe658aaa3193d97412871fa4ab3f8"} Mar 07 08:11:05 crc kubenswrapper[4761]: I0307 08:11:05.583030 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="config-reloader" containerID="cri-o://acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020" gracePeriod=600 Mar 07 08:11:05 crc kubenswrapper[4761]: I0307 08:11:05.583147 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="thanos-sidecar" containerID="cri-o://a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed" gracePeriod=600 Mar 07 08:11:05 crc kubenswrapper[4761]: I0307 08:11:05.583512 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:11:05 crc kubenswrapper[4761]: I0307 08:11:05.606142 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hgl7h"] Mar 07 08:11:05 crc kubenswrapper[4761]: I0307 08:11:05.622906 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" 
podStartSLOduration=-9223371964.231888 podStartE2EDuration="1m12.622888739s" podCreationTimestamp="2026-03-07 08:09:53 +0000 UTC" firstStartedPulling="2026-03-07 08:09:56.580875151 +0000 UTC m=+1253.490041626" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:05.614458674 +0000 UTC m=+1322.523625159" watchObservedRunningTime="2026-03-07 08:11:05.622888739 +0000 UTC m=+1322.532055214" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.578631 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.603917 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hgl7h" event={"ID":"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8","Type":"ContainerStarted","Data":"a3b1a9637d6c680134f028c0b657f4d1920c25e24ee30a7a62adf8d224b1cdc5"} Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.603971 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hgl7h" event={"ID":"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8","Type":"ContainerStarted","Data":"630a45ad24700283138aa53d6c857c53d4701fdaec4dde0e7f7c75c1715c2067"} Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.618561 4761 generic.go:334] "Generic (PLEG): container finished" podID="af7db490-ce95-4946-b358-c248703a4a53" containerID="a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed" exitCode=0 Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.618596 4761 generic.go:334] "Generic (PLEG): container finished" podID="af7db490-ce95-4946-b358-c248703a4a53" containerID="acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020" exitCode=0 Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.618609 4761 generic.go:334] "Generic (PLEG): container finished" podID="af7db490-ce95-4946-b358-c248703a4a53" 
containerID="f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee" exitCode=0 Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.618632 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerDied","Data":"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed"} Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.618685 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerDied","Data":"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020"} Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.618702 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerDied","Data":"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee"} Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.618812 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerDied","Data":"c763d26a506e8b9f5808c71df3c7678c3fb50676b34ea74d7614233d21c5de8d"} Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.618839 4761 scope.go:117] "RemoveContainer" containerID="a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.618865 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.663682 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-hgl7h" podStartSLOduration=2.66361426 podStartE2EDuration="2.66361426s" podCreationTimestamp="2026-03-07 08:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:06.624028944 +0000 UTC m=+1323.533195419" watchObservedRunningTime="2026-03-07 08:11:06.66361426 +0000 UTC m=+1323.572780735" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676024 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-2\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676084 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-1\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676156 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-thanos-prometheus-http-client-file\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676192 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" 
(UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-tls-assets\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676219 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af7db490-ce95-4946-b358-c248703a4a53-config-out\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676335 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676402 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-0\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676494 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-web-config\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676581 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-config\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 
08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676639 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g8mx\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-kube-api-access-9g8mx\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676817 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.677352 4761 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.677657 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.680052 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.682843 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af7db490-ce95-4946-b358-c248703a4a53-config-out" (OuterVolumeSpecName: "config-out") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.683459 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.687253 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-config" (OuterVolumeSpecName: "config") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.689582 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-kube-api-access-9g8mx" (OuterVolumeSpecName: "kube-api-access-9g8mx") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "kube-api-access-9g8mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.691227 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.721498 4761 scope.go:117] "RemoveContainer" containerID="acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.729844 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.732269 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-web-config" (OuterVolumeSpecName: "web-config") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.780562 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.780608 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g8mx\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-kube-api-access-9g8mx\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.786826 4761 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.786882 4761 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.786906 4761 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-tls-assets\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.786917 4761 reconciler_common.go:293] "Volume detached for 
volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af7db490-ce95-4946-b358-c248703a4a53-config-out\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.786942 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") on node \"crc\" " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.786954 4761 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.786964 4761 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-web-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.807511 4761 scope.go:117] "RemoveContainer" containerID="f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.818517 4761 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.818661 4761 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c") on node "crc"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.836925 4761 scope.go:117] "RemoveContainer" containerID="56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.864994 4761 scope.go:117] "RemoveContainer" containerID="a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed"
Mar 07 08:11:06 crc kubenswrapper[4761]: E0307 08:11:06.865547 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed\": container with ID starting with a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed not found: ID does not exist" containerID="a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.865593 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed"} err="failed to get container status \"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed\": rpc error: code = NotFound desc = could not find container \"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed\": container with ID starting with a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed not found: ID does not exist"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.865631 4761 scope.go:117] "RemoveContainer" containerID="acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020"
Mar 07 08:11:06 crc kubenswrapper[4761]: E0307 08:11:06.866160 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020\": container with ID starting with acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020 not found: ID does not exist" containerID="acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.866217 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020"} err="failed to get container status \"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020\": rpc error: code = NotFound desc = could not find container \"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020\": container with ID starting with acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020 not found: ID does not exist"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.866239 4761 scope.go:117] "RemoveContainer" containerID="f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee"
Mar 07 08:11:06 crc kubenswrapper[4761]: E0307 08:11:06.866698 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee\": container with ID starting with f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee not found: ID does not exist" containerID="f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.866748 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee"} err="failed to get container status \"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee\": rpc error: code = NotFound desc = could not find container \"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee\": container with ID starting with f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee not found: ID does not exist"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.866770 4761 scope.go:117] "RemoveContainer" containerID="56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5"
Mar 07 08:11:06 crc kubenswrapper[4761]: E0307 08:11:06.867184 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5\": container with ID starting with 56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5 not found: ID does not exist" containerID="56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.867222 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5"} err="failed to get container status \"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5\": rpc error: code = NotFound desc = could not find container \"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5\": container with ID starting with 56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5 not found: ID does not exist"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.867241 4761 scope.go:117] "RemoveContainer" containerID="a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.868346 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed"} err="failed to get container status \"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed\": rpc error: code = NotFound desc = could not find container \"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed\": container with ID starting with a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed not found: ID does not exist"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.868378 4761 scope.go:117] "RemoveContainer" containerID="acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.868964 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020"} err="failed to get container status \"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020\": rpc error: code = NotFound desc = could not find container \"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020\": container with ID starting with acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020 not found: ID does not exist"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.868996 4761 scope.go:117] "RemoveContainer" containerID="f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.870345 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee"} err="failed to get container status \"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee\": rpc error: code = NotFound desc = could not find container \"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee\": container with ID starting with f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee not found: ID does not exist"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.870373 4761 scope.go:117] "RemoveContainer" containerID="56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.870913 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5"} err="failed to get container status \"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5\": rpc error: code = NotFound desc = could not find container \"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5\": container with ID starting with 56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5 not found: ID does not exist"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.870943 4761 scope.go:117] "RemoveContainer" containerID="a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.871159 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed"} err="failed to get container status \"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed\": rpc error: code = NotFound desc = could not find container \"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed\": container with ID starting with a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed not found: ID does not exist"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.871206 4761 scope.go:117] "RemoveContainer" containerID="acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.871997 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020"} err="failed to get container status \"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020\": rpc error: code = NotFound desc = could not find container \"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020\": container with ID starting with acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020 not found: ID does not exist"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.872022 4761 scope.go:117] "RemoveContainer" containerID="f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.873820 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee"} err="failed to get container status \"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee\": rpc error: code = NotFound desc = could not find container \"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee\": container with ID starting with f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee not found: ID does not exist"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.873847 4761 scope.go:117] "RemoveContainer" containerID="56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.874291 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5"} err="failed to get container status \"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5\": rpc error: code = NotFound desc = could not find container \"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5\": container with ID starting with 56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5 not found: ID does not exist"
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.889215 4761 reconciler_common.go:293] "Volume detached for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") on node \"crc\" DevicePath \"\""
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.971262 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.986556 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.027816 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 07 08:11:07 crc kubenswrapper[4761]: E0307 08:11:07.028243 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="prometheus"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.028261 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="prometheus"
Mar 07 08:11:07 crc kubenswrapper[4761]: E0307 08:11:07.028293 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="init-config-reloader"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.028300 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="init-config-reloader"
Mar 07 08:11:07 crc kubenswrapper[4761]: E0307 08:11:07.028311 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="thanos-sidecar"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.028318 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="thanos-sidecar"
Mar 07 08:11:07 crc kubenswrapper[4761]: E0307 08:11:07.028342 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="config-reloader"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.028350 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="config-reloader"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.028513 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="thanos-sidecar"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.028526 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="prometheus"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.028548 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="config-reloader"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.030524 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.034335 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.034691 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.034774 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.034903 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.035114 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.035282 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-bct6h"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.036496 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.036534 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.041674 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.098303 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.199576 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/526b9328-0f86-4c3d-9a27-116742cee11a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.199633 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.199669 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-config\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.199708 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/526b9328-0f86-4c3d-9a27-116742cee11a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.199890 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.199946 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.200036 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.200132 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz7hz\" (UniqueName: \"kubernetes.io/projected/526b9328-0f86-4c3d-9a27-116742cee11a-kube-api-access-bz7hz\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.200165 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.200225 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.200246 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/526b9328-0f86-4c3d-9a27-116742cee11a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.200294 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/526b9328-0f86-4c3d-9a27-116742cee11a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.200418 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/526b9328-0f86-4c3d-9a27-116742cee11a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302121 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302159 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/526b9328-0f86-4c3d-9a27-116742cee11a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302206 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/526b9328-0f86-4c3d-9a27-116742cee11a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302228 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/526b9328-0f86-4c3d-9a27-116742cee11a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302285 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/526b9328-0f86-4c3d-9a27-116742cee11a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302310 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302350 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-config\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302375 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/526b9328-0f86-4c3d-9a27-116742cee11a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302428 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302453 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302508 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302552 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz7hz\" (UniqueName: \"kubernetes.io/projected/526b9328-0f86-4c3d-9a27-116742cee11a-kube-api-access-bz7hz\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302588 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.303572 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/526b9328-0f86-4c3d-9a27-116742cee11a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.304788 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/526b9328-0f86-4c3d-9a27-116742cee11a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.305035 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/526b9328-0f86-4c3d-9a27-116742cee11a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.306144 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-config\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.306434 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.306613 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.308230 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.308258 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0efec040dc2ef2408d0699e8dc67045c63207730fe365a5f7d021c687807de92/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.313688 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/526b9328-0f86-4c3d-9a27-116742cee11a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.314065 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/526b9328-0f86-4c3d-9a27-116742cee11a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.314135 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.314210 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.314507 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.328630 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz7hz\" (UniqueName: \"kubernetes.io/projected/526b9328-0f86-4c3d-9a27-116742cee11a-kube-api-access-bz7hz\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.371922 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.397737 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.649812 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"dafbcfb60e0e988f6a221d564a7e54349ae7bbca9b2dac1e87b40f7e482c4973"}
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.650215 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"62bf96a91f784de21a48285eaa21964d9363cd6bc8d139ef921624a71a850898"}
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.654538 4761 generic.go:334] "Generic (PLEG): container finished" podID="202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8" containerID="a3b1a9637d6c680134f028c0b657f4d1920c25e24ee30a7a62adf8d224b1cdc5" exitCode=0
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.654586 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hgl7h" event={"ID":"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8","Type":"ContainerDied","Data":"a3b1a9637d6c680134f028c0b657f4d1920c25e24ee30a7a62adf8d224b1cdc5"}
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.725829 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af7db490-ce95-4946-b358-c248703a4a53" path="/var/lib/kubelet/pods/af7db490-ce95-4946-b358-c248703a4a53/volumes"
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.943516 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.147275 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-wq5n6" podUID="9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d" containerName="ovn-controller" probeResult="failure" output=<
Mar 07 08:11:08 crc kubenswrapper[4761]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Mar 07 08:11:08 crc kubenswrapper[4761]: >
Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.151209 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-blwhr"
Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.369860 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wq5n6-config-gfqbn"]
Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.371870 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wq5n6-config-gfqbn"
Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.375949 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.377474 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wq5n6-config-gfqbn"]
Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.429204 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8c5n\" (UniqueName: \"kubernetes.io/projected/bd31dd40-d30a-4680-b1c3-0886cf7678df-kube-api-access-h8c5n\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn"
Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.429262 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-additional-scripts\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn"
Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.429353 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run-ovn\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn"
Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.429390 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-scripts\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn"
Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.429421 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-log-ovn\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn"
Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.429465 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn"
Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.531389 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-scripts\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn"
Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.531446 4761
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-log-ovn\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.531502 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.531554 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8c5n\" (UniqueName: \"kubernetes.io/projected/bd31dd40-d30a-4680-b1c3-0886cf7678df-kube-api-access-h8c5n\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.531598 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-additional-scripts\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.531661 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run-ovn\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.531854 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run-ovn\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.531871 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.531873 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-log-ovn\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.532320 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-additional-scripts\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.533252 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-scripts\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.551355 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h8c5n\" (UniqueName: \"kubernetes.io/projected/bd31dd40-d30a-4680-b1c3-0886cf7678df-kube-api-access-h8c5n\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.666595 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"1b103734f166e33f860527dc064b50a1a7b8a414a55d901313797731a2db980b"} Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.667483 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"ea02e32d6832911e949949a14f4f25d6ad4303430470a7d08a0f36ac86be1ded"} Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.668253 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"526b9328-0f86-4c3d-9a27-116742cee11a","Type":"ContainerStarted","Data":"b8ea649c864cd948136e4536b2bce9025a79832c133a48788f6033e24517438b"} Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.698508 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:11 crc kubenswrapper[4761]: I0307 08:11:11.723395 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"526b9328-0f86-4c3d-9a27-116742cee11a","Type":"ContainerStarted","Data":"e7613d1617c5bdd57f7de839816978aa6c9b6aa0f00e54e2ecd7505bd3f3ed9c"} Mar 07 08:11:13 crc kubenswrapper[4761]: I0307 08:11:13.080144 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-wq5n6" podUID="9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d" containerName="ovn-controller" probeResult="failure" output=< Mar 07 08:11:13 crc kubenswrapper[4761]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 07 08:11:13 crc kubenswrapper[4761]: > Mar 07 08:11:13 crc kubenswrapper[4761]: I0307 08:11:13.771367 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:11:13 crc kubenswrapper[4761]: I0307 08:11:13.771432 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.098927 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.191538 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.212906 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.228906 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.242598 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.396139 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-operator-scripts\") pod \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\" (UID: \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\") " Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.396216 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2lwk\" (UniqueName: \"kubernetes.io/projected/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-kube-api-access-k2lwk\") pod \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\" (UID: \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\") " Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.397756 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8" (UID: "202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.403079 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-kube-api-access-k2lwk" (OuterVolumeSpecName: "kube-api-access-k2lwk") pod "202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8" (UID: "202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8"). InnerVolumeSpecName "kube-api-access-k2lwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.494245 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wq5n6-config-gfqbn"] Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.498687 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.498765 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2lwk\" (UniqueName: \"kubernetes.io/projected/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-kube-api-access-k2lwk\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:15 crc kubenswrapper[4761]: W0307 08:11:15.503566 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd31dd40_d30a_4680_b1c3_0886cf7678df.slice/crio-7e562e3d9575810643f68451208d8c7ba1469727e5da7859814a4440a05defe4 WatchSource:0}: Error finding container 7e562e3d9575810643f68451208d8c7ba1469727e5da7859814a4440a05defe4: Status 404 returned error can't find the container with id 7e562e3d9575810643f68451208d8c7ba1469727e5da7859814a4440a05defe4 Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.769587 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-g9w2m" 
event={"ID":"a990e713-634f-47c4-acbe-980ed66d30fe","Type":"ContainerStarted","Data":"fe7c46f93fcb404a48fdfddcf53140cbe34999481e23b77955840ad956bcf535"} Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.774358 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hgl7h" event={"ID":"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8","Type":"ContainerDied","Data":"630a45ad24700283138aa53d6c857c53d4701fdaec4dde0e7f7c75c1715c2067"} Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.774387 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="630a45ad24700283138aa53d6c857c53d4701fdaec4dde0e7f7c75c1715c2067" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.774399 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.777868 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wq5n6-config-gfqbn" event={"ID":"bd31dd40-d30a-4680-b1c3-0886cf7678df","Type":"ContainerStarted","Data":"7e562e3d9575810643f68451208d8c7ba1469727e5da7859814a4440a05defe4"} Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.790123 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-g9w2m" podStartSLOduration=2.681489945 podStartE2EDuration="17.790101053s" podCreationTimestamp="2026-03-07 08:10:58 +0000 UTC" firstStartedPulling="2026-03-07 08:10:59.953619705 +0000 UTC m=+1316.862786180" lastFinishedPulling="2026-03-07 08:11:15.062230813 +0000 UTC m=+1331.971397288" observedRunningTime="2026-03-07 08:11:15.783648589 +0000 UTC m=+1332.692815064" watchObservedRunningTime="2026-03-07 08:11:15.790101053 +0000 UTC m=+1332.699267528" Mar 07 08:11:16 crc kubenswrapper[4761]: I0307 08:11:16.788503 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"4b0c1efde4b9dfab021b15d5bcc07a1c990aab97bf53550f39c7dedb5ade26c3"} Mar 07 08:11:16 crc kubenswrapper[4761]: I0307 08:11:16.789030 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"47a49b2bb5fe5fc077d3496d6ea087d19e34567cd23d7e1dde74ca72a2f2328f"} Mar 07 08:11:16 crc kubenswrapper[4761]: I0307 08:11:16.789045 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"f6dc09b76805de0617802aea0196baa7bbae94107693e3d603c7424fa8659d07"} Mar 07 08:11:16 crc kubenswrapper[4761]: I0307 08:11:16.790726 4761 generic.go:334] "Generic (PLEG): container finished" podID="bd31dd40-d30a-4680-b1c3-0886cf7678df" containerID="678c4a7b6bb0f4d19eafaa5654456b93d7c9f779bbb622caf1e8268648186ea9" exitCode=0 Mar 07 08:11:16 crc kubenswrapper[4761]: I0307 08:11:16.790954 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wq5n6-config-gfqbn" event={"ID":"bd31dd40-d30a-4680-b1c3-0886cf7678df","Type":"ContainerDied","Data":"678c4a7b6bb0f4d19eafaa5654456b93d7c9f779bbb622caf1e8268648186ea9"} Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.380733 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-wnw7q"] Mar 07 08:11:17 crc kubenswrapper[4761]: E0307 08:11:17.381424 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8" containerName="mariadb-account-create-update" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.381440 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8" containerName="mariadb-account-create-update" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.381651 4761 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8" containerName="mariadb-account-create-update" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.382402 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-wnw7q" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.391412 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-wnw7q"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.539709 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-operator-scripts\") pod \"heat-db-create-wnw7q\" (UID: \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\") " pod="openstack/heat-db-create-wnw7q" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.539873 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqd8c\" (UniqueName: \"kubernetes.io/projected/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-kube-api-access-jqd8c\") pod \"heat-db-create-wnw7q\" (UID: \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\") " pod="openstack/heat-db-create-wnw7q" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.589740 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-3014-account-create-update-gtc26"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.591002 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.593374 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.604623 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3014-account-create-update-gtc26"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.642675 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-operator-scripts\") pod \"heat-db-create-wnw7q\" (UID: \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\") " pod="openstack/heat-db-create-wnw7q" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.642893 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqd8c\" (UniqueName: \"kubernetes.io/projected/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-kube-api-access-jqd8c\") pod \"heat-db-create-wnw7q\" (UID: \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\") " pod="openstack/heat-db-create-wnw7q" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.643578 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-operator-scripts\") pod \"heat-db-create-wnw7q\" (UID: \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\") " pod="openstack/heat-db-create-wnw7q" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.683634 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-hbnpl"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.684992 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-hbnpl" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.694611 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqd8c\" (UniqueName: \"kubernetes.io/projected/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-kube-api-access-jqd8c\") pod \"heat-db-create-wnw7q\" (UID: \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\") " pod="openstack/heat-db-create-wnw7q" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.703629 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-wnw7q" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.744321 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-operator-scripts\") pod \"heat-3014-account-create-update-gtc26\" (UID: \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\") " pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.744412 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcw9w\" (UniqueName: \"kubernetes.io/projected/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-kube-api-access-tcw9w\") pod \"heat-3014-account-create-update-gtc26\" (UID: \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\") " pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.777676 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hbnpl"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.777741 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-17dd-account-create-update-fwfjn"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.787165 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-17dd-account-create-update-fwfjn" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.790606 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.843879 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-17dd-account-create-update-fwfjn"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.846664 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52ac8e30-44e2-48ba-8272-112bb012a7e2-operator-scripts\") pod \"cinder-db-create-hbnpl\" (UID: \"52ac8e30-44e2-48ba-8272-112bb012a7e2\") " pod="openstack/cinder-db-create-hbnpl" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.846782 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-operator-scripts\") pod \"heat-3014-account-create-update-gtc26\" (UID: \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\") " pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.846823 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knkv2\" (UniqueName: \"kubernetes.io/projected/52ac8e30-44e2-48ba-8272-112bb012a7e2-kube-api-access-knkv2\") pod \"cinder-db-create-hbnpl\" (UID: \"52ac8e30-44e2-48ba-8272-112bb012a7e2\") " pod="openstack/cinder-db-create-hbnpl" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.846876 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcw9w\" (UniqueName: \"kubernetes.io/projected/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-kube-api-access-tcw9w\") pod \"heat-3014-account-create-update-gtc26\" (UID: 
\"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\") " pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.856668 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-operator-scripts\") pod \"heat-3014-account-create-update-gtc26\" (UID: \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\") " pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.866133 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-pf6dj"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.867480 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pf6dj" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.879980 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"89c10df343cc547b94d58283f6e3ee290952c6db7d1818efb34ac75f63e2bf09"} Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.895487 4761 generic.go:334] "Generic (PLEG): container finished" podID="526b9328-0f86-4c3d-9a27-116742cee11a" containerID="e7613d1617c5bdd57f7de839816978aa6c9b6aa0f00e54e2ecd7505bd3f3ed9c" exitCode=0 Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.895770 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"526b9328-0f86-4c3d-9a27-116742cee11a","Type":"ContainerDied","Data":"e7613d1617c5bdd57f7de839816978aa6c9b6aa0f00e54e2ecd7505bd3f3ed9c"} Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.899807 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-pf6dj"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.927908 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-tcw9w\" (UniqueName: \"kubernetes.io/projected/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-kube-api-access-tcw9w\") pod \"heat-3014-account-create-update-gtc26\" (UID: \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\") " pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.943842 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-tctqn"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.945226 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.949708 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcj6z\" (UniqueName: \"kubernetes.io/projected/92bbc752-8315-47e4-993a-db9de1da8c87-kube-api-access-kcj6z\") pod \"barbican-db-create-pf6dj\" (UID: \"92bbc752-8315-47e4-993a-db9de1da8c87\") " pod="openstack/barbican-db-create-pf6dj" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.949843 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b359be0-899b-479e-ac6c-1ed4422b7da8-operator-scripts\") pod \"cinder-17dd-account-create-update-fwfjn\" (UID: \"6b359be0-899b-479e-ac6c-1ed4422b7da8\") " pod="openstack/cinder-17dd-account-create-update-fwfjn" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.949939 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52ac8e30-44e2-48ba-8272-112bb012a7e2-operator-scripts\") pod \"cinder-db-create-hbnpl\" (UID: \"52ac8e30-44e2-48ba-8272-112bb012a7e2\") " pod="openstack/cinder-db-create-hbnpl" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.950000 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-pz8zl\" (UniqueName: \"kubernetes.io/projected/6b359be0-899b-479e-ac6c-1ed4422b7da8-kube-api-access-pz8zl\") pod \"cinder-17dd-account-create-update-fwfjn\" (UID: \"6b359be0-899b-479e-ac6c-1ed4422b7da8\") " pod="openstack/cinder-17dd-account-create-update-fwfjn" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.950062 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knkv2\" (UniqueName: \"kubernetes.io/projected/52ac8e30-44e2-48ba-8272-112bb012a7e2-kube-api-access-knkv2\") pod \"cinder-db-create-hbnpl\" (UID: \"52ac8e30-44e2-48ba-8272-112bb012a7e2\") " pod="openstack/cinder-db-create-hbnpl" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.950089 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92bbc752-8315-47e4-993a-db9de1da8c87-operator-scripts\") pod \"barbican-db-create-pf6dj\" (UID: \"92bbc752-8315-47e4-993a-db9de1da8c87\") " pod="openstack/barbican-db-create-pf6dj" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.950787 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52ac8e30-44e2-48ba-8272-112bb012a7e2-operator-scripts\") pod \"cinder-db-create-hbnpl\" (UID: \"52ac8e30-44e2-48ba-8272-112bb012a7e2\") " pod="openstack/cinder-db-create-hbnpl" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.951372 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.951568 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.951835 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 
08:11:17.952026 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pgh8w" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.962182 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.965647 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-tctqn"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.996676 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-eedb-account-create-update-wc6wq"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.998150 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.006013 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.008415 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knkv2\" (UniqueName: \"kubernetes.io/projected/52ac8e30-44e2-48ba-8272-112bb012a7e2-kube-api-access-knkv2\") pod \"cinder-db-create-hbnpl\" (UID: \"52ac8e30-44e2-48ba-8272-112bb012a7e2\") " pod="openstack/cinder-db-create-hbnpl" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.011892 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-eedb-account-create-update-wc6wq"] Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.049773 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-mdw2w"] Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.053708 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-combined-ca-bundle\") pod \"keystone-db-sync-tctqn\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.073386 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p676r\" (UniqueName: \"kubernetes.io/projected/15e98bf9-0ded-4a61-b436-1f652f69e599-kube-api-access-p676r\") pod \"keystone-db-sync-tctqn\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.074149 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b359be0-899b-479e-ac6c-1ed4422b7da8-operator-scripts\") pod \"cinder-17dd-account-create-update-fwfjn\" (UID: \"6b359be0-899b-479e-ac6c-1ed4422b7da8\") " pod="openstack/cinder-17dd-account-create-update-fwfjn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.078507 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b359be0-899b-479e-ac6c-1ed4422b7da8-operator-scripts\") pod \"cinder-17dd-account-create-update-fwfjn\" (UID: \"6b359be0-899b-479e-ac6c-1ed4422b7da8\") " pod="openstack/cinder-17dd-account-create-update-fwfjn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.085260 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz8zl\" (UniqueName: \"kubernetes.io/projected/6b359be0-899b-479e-ac6c-1ed4422b7da8-kube-api-access-pz8zl\") pod \"cinder-17dd-account-create-update-fwfjn\" (UID: \"6b359be0-899b-479e-ac6c-1ed4422b7da8\") " pod="openstack/cinder-17dd-account-create-update-fwfjn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.085484 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92bbc752-8315-47e4-993a-db9de1da8c87-operator-scripts\") pod \"barbican-db-create-pf6dj\" (UID: \"92bbc752-8315-47e4-993a-db9de1da8c87\") " pod="openstack/barbican-db-create-pf6dj" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.085560 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-config-data\") pod \"keystone-db-sync-tctqn\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.085617 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcj6z\" (UniqueName: \"kubernetes.io/projected/92bbc752-8315-47e4-993a-db9de1da8c87-kube-api-access-kcj6z\") pod \"barbican-db-create-pf6dj\" (UID: \"92bbc752-8315-47e4-993a-db9de1da8c87\") " pod="openstack/barbican-db-create-pf6dj" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.087165 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92bbc752-8315-47e4-993a-db9de1da8c87-operator-scripts\") pod \"barbican-db-create-pf6dj\" (UID: \"92bbc752-8315-47e4-993a-db9de1da8c87\") " pod="openstack/barbican-db-create-pf6dj" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.092189 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mdw2w" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.091292 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mdw2w"] Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.133617 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcj6z\" (UniqueName: \"kubernetes.io/projected/92bbc752-8315-47e4-993a-db9de1da8c87-kube-api-access-kcj6z\") pod \"barbican-db-create-pf6dj\" (UID: \"92bbc752-8315-47e4-993a-db9de1da8c87\") " pod="openstack/barbican-db-create-pf6dj" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.143820 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hbnpl" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.147406 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz8zl\" (UniqueName: \"kubernetes.io/projected/6b359be0-899b-479e-ac6c-1ed4422b7da8-kube-api-access-pz8zl\") pod \"cinder-17dd-account-create-update-fwfjn\" (UID: \"6b359be0-899b-479e-ac6c-1ed4422b7da8\") " pod="openstack/cinder-17dd-account-create-update-fwfjn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.183203 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-wq5n6" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.187634 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fdkh\" (UniqueName: \"kubernetes.io/projected/47e8c767-31e1-4609-8c1f-b62577164637-kube-api-access-8fdkh\") pod \"barbican-eedb-account-create-update-wc6wq\" (UID: \"47e8c767-31e1-4609-8c1f-b62577164637\") " pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.187705 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-combined-ca-bundle\") pod \"keystone-db-sync-tctqn\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.187766 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p676r\" (UniqueName: \"kubernetes.io/projected/15e98bf9-0ded-4a61-b436-1f652f69e599-kube-api-access-p676r\") pod \"keystone-db-sync-tctqn\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.187882 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tr99\" (UniqueName: \"kubernetes.io/projected/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-kube-api-access-4tr99\") pod \"neutron-db-create-mdw2w\" (UID: \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\") " pod="openstack/neutron-db-create-mdw2w" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.187912 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-operator-scripts\") pod \"neutron-db-create-mdw2w\" (UID: \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\") " pod="openstack/neutron-db-create-mdw2w" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.187934 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47e8c767-31e1-4609-8c1f-b62577164637-operator-scripts\") pod \"barbican-eedb-account-create-update-wc6wq\" (UID: \"47e8c767-31e1-4609-8c1f-b62577164637\") " pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.187983 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-config-data\") pod \"keystone-db-sync-tctqn\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.192978 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-config-data\") pod \"keystone-db-sync-tctqn\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.199926 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-combined-ca-bundle\") pod \"keystone-db-sync-tctqn\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.217207 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-17dd-account-create-update-fwfjn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.218657 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p676r\" (UniqueName: \"kubernetes.io/projected/15e98bf9-0ded-4a61-b436-1f652f69e599-kube-api-access-p676r\") pod \"keystone-db-sync-tctqn\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.248810 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-736f-account-create-update-jjxjx"] Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.251323 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.258636 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.276040 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-736f-account-create-update-jjxjx"] Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.292817 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tr99\" (UniqueName: \"kubernetes.io/projected/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-kube-api-access-4tr99\") pod \"neutron-db-create-mdw2w\" (UID: \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\") " pod="openstack/neutron-db-create-mdw2w" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.293309 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-operator-scripts\") pod \"neutron-db-create-mdw2w\" (UID: \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\") " pod="openstack/neutron-db-create-mdw2w" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.294311 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-operator-scripts\") pod \"neutron-db-create-mdw2w\" (UID: \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\") " pod="openstack/neutron-db-create-mdw2w" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.293342 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47e8c767-31e1-4609-8c1f-b62577164637-operator-scripts\") pod \"barbican-eedb-account-create-update-wc6wq\" (UID: \"47e8c767-31e1-4609-8c1f-b62577164637\") " pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 
08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.295082 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47e8c767-31e1-4609-8c1f-b62577164637-operator-scripts\") pod \"barbican-eedb-account-create-update-wc6wq\" (UID: \"47e8c767-31e1-4609-8c1f-b62577164637\") " pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.297331 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fdkh\" (UniqueName: \"kubernetes.io/projected/47e8c767-31e1-4609-8c1f-b62577164637-kube-api-access-8fdkh\") pod \"barbican-eedb-account-create-update-wc6wq\" (UID: \"47e8c767-31e1-4609-8c1f-b62577164637\") " pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.335197 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fdkh\" (UniqueName: \"kubernetes.io/projected/47e8c767-31e1-4609-8c1f-b62577164637-kube-api-access-8fdkh\") pod \"barbican-eedb-account-create-update-wc6wq\" (UID: \"47e8c767-31e1-4609-8c1f-b62577164637\") " pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.338893 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tr99\" (UniqueName: \"kubernetes.io/projected/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-kube-api-access-4tr99\") pod \"neutron-db-create-mdw2w\" (UID: \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\") " pod="openstack/neutron-db-create-mdw2w" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.405111 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4d5d960-90ad-4ca1-a874-6903a4d93d90-operator-scripts\") pod \"neutron-736f-account-create-update-jjxjx\" (UID: 
\"b4d5d960-90ad-4ca1-a874-6903a4d93d90\") " pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.405325 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhbj8\" (UniqueName: \"kubernetes.io/projected/b4d5d960-90ad-4ca1-a874-6903a4d93d90-kube-api-access-dhbj8\") pod \"neutron-736f-account-create-update-jjxjx\" (UID: \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\") " pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.458449 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pf6dj" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.507014 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4d5d960-90ad-4ca1-a874-6903a4d93d90-operator-scripts\") pod \"neutron-736f-account-create-update-jjxjx\" (UID: \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\") " pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.507117 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhbj8\" (UniqueName: \"kubernetes.io/projected/b4d5d960-90ad-4ca1-a874-6903a4d93d90-kube-api-access-dhbj8\") pod \"neutron-736f-account-create-update-jjxjx\" (UID: \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\") " pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.508422 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4d5d960-90ad-4ca1-a874-6903a4d93d90-operator-scripts\") pod \"neutron-736f-account-create-update-jjxjx\" (UID: \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\") " pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:18 crc 
kubenswrapper[4761]: I0307 08:11:18.541588 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhbj8\" (UniqueName: \"kubernetes.io/projected/b4d5d960-90ad-4ca1-a874-6903a4d93d90-kube-api-access-dhbj8\") pod \"neutron-736f-account-create-update-jjxjx\" (UID: \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\") " pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.558376 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.580904 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.594455 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mdw2w" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.616517 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.852555 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.915471 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"526b9328-0f86-4c3d-9a27-116742cee11a","Type":"ContainerStarted","Data":"eaf5d6915ee9d53eade5af688260e4d616441cf10b43a2aadbfd990100f8b881"} Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.920621 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wq5n6-config-gfqbn" event={"ID":"bd31dd40-d30a-4680-b1c3-0886cf7678df","Type":"ContainerDied","Data":"7e562e3d9575810643f68451208d8c7ba1469727e5da7859814a4440a05defe4"} Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.920667 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e562e3d9575810643f68451208d8c7ba1469727e5da7859814a4440a05defe4" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.921936 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.930273 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-scripts\") pod \"bd31dd40-d30a-4680-b1c3-0886cf7678df\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.930482 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run\") pod \"bd31dd40-d30a-4680-b1c3-0886cf7678df\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.930585 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8c5n\" (UniqueName: \"kubernetes.io/projected/bd31dd40-d30a-4680-b1c3-0886cf7678df-kube-api-access-h8c5n\") pod \"bd31dd40-d30a-4680-b1c3-0886cf7678df\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.930604 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-additional-scripts\") pod \"bd31dd40-d30a-4680-b1c3-0886cf7678df\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.930628 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run-ovn\") pod \"bd31dd40-d30a-4680-b1c3-0886cf7678df\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.930751 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-log-ovn\") pod \"bd31dd40-d30a-4680-b1c3-0886cf7678df\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.931448 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "bd31dd40-d30a-4680-b1c3-0886cf7678df" (UID: "bd31dd40-d30a-4680-b1c3-0886cf7678df"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.931479 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run" (OuterVolumeSpecName: "var-run") pod "bd31dd40-d30a-4680-b1c3-0886cf7678df" (UID: "bd31dd40-d30a-4680-b1c3-0886cf7678df"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.933257 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "bd31dd40-d30a-4680-b1c3-0886cf7678df" (UID: "bd31dd40-d30a-4680-b1c3-0886cf7678df"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.933314 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "bd31dd40-d30a-4680-b1c3-0886cf7678df" (UID: "bd31dd40-d30a-4680-b1c3-0886cf7678df"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.933735 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-scripts" (OuterVolumeSpecName: "scripts") pod "bd31dd40-d30a-4680-b1c3-0886cf7678df" (UID: "bd31dd40-d30a-4680-b1c3-0886cf7678df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.945610 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-wnw7q"] Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.955944 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd31dd40-d30a-4680-b1c3-0886cf7678df-kube-api-access-h8c5n" (OuterVolumeSpecName: "kube-api-access-h8c5n") pod "bd31dd40-d30a-4680-b1c3-0886cf7678df" (UID: "bd31dd40-d30a-4680-b1c3-0886cf7678df"). InnerVolumeSpecName "kube-api-access-h8c5n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.039840 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8c5n\" (UniqueName: \"kubernetes.io/projected/bd31dd40-d30a-4680-b1c3-0886cf7678df-kube-api-access-h8c5n\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.040162 4761 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.040173 4761 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.040181 4761 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.040190 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.040199 4761 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.191569 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-17dd-account-create-update-fwfjn"] Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.264710 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hbnpl"] Mar 07 08:11:19 crc 
kubenswrapper[4761]: I0307 08:11:19.395120 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3014-account-create-update-gtc26"] Mar 07 08:11:19 crc kubenswrapper[4761]: W0307 08:11:19.430036 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9894a0c_ae83_4f9b_96c5_4bac5772ad56.slice/crio-34f277fe3d4f13198ef7337e6036062b5c98a21271f32894cb9e14f89258920d WatchSource:0}: Error finding container 34f277fe3d4f13198ef7337e6036062b5c98a21271f32894cb9e14f89258920d: Status 404 returned error can't find the container with id 34f277fe3d4f13198ef7337e6036062b5c98a21271f32894cb9e14f89258920d Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.597761 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-pf6dj"] Mar 07 08:11:19 crc kubenswrapper[4761]: E0307 08:11:19.634866 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd31dd40_d30a_4680_b1c3_0886cf7678df.slice\": RecentStats: unable to find data in memory cache]" Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.940199 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pf6dj" event={"ID":"92bbc752-8315-47e4-993a-db9de1da8c87","Type":"ContainerStarted","Data":"0f58c4fafff0cb8ab97e33e3ab3d9ce6836a3e4ac1439c19a573c21811185fee"} Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.940259 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pf6dj" event={"ID":"92bbc752-8315-47e4-993a-db9de1da8c87","Type":"ContainerStarted","Data":"5c0f05af8eabe2860c777a18ea45cbfb508ab0a805a002599decb6f92f94656d"} Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.950597 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-17dd-account-create-update-fwfjn" 
event={"ID":"6b359be0-899b-479e-ac6c-1ed4422b7da8","Type":"ContainerStarted","Data":"213af97bbe0e3ae38c1d1515fc22f6b13311545e5a40f677bbee0870e83ed3ae"}
Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.950939 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-17dd-account-create-update-fwfjn" event={"ID":"6b359be0-899b-479e-ac6c-1ed4422b7da8","Type":"ContainerStarted","Data":"9801cbe1cc4bb1b0ec16f5aa1a7f90ea4feb5004c2d6016810b5cbb31c039dc4"}
Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.963579 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hbnpl" event={"ID":"52ac8e30-44e2-48ba-8272-112bb012a7e2","Type":"ContainerStarted","Data":"7dc0901d8bff55c1c74207d6bd5522c5c55621687f407e19c00a0a08ad96732d"}
Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.963628 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hbnpl" event={"ID":"52ac8e30-44e2-48ba-8272-112bb012a7e2","Type":"ContainerStarted","Data":"5b71d89b5a89190e6a686335d1ca623a12f09bd317c93d2865d90e29e29e37f5"}
Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.968512 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-wnw7q" event={"ID":"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b","Type":"ContainerStarted","Data":"f9efffad10394a551925d203976714f1d199a96ec9a9d78778c7d97eb32fec2c"}
Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.968550 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-wnw7q" event={"ID":"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b","Type":"ContainerStarted","Data":"149c23796c936aa420fce8749e5ba3d7121c8ecb122efa24ce630498cef827a2"}
Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.970830 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3014-account-create-update-gtc26" event={"ID":"c9894a0c-ae83-4f9b-96c5-4bac5772ad56","Type":"ContainerStarted","Data":"542f79b9da20217da4609522244e7105c548cdfef4734a40d1dafb1bb2fb8f49"}
Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.970858 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3014-account-create-update-gtc26" event={"ID":"c9894a0c-ae83-4f9b-96c5-4bac5772ad56","Type":"ContainerStarted","Data":"34f277fe3d4f13198ef7337e6036062b5c98a21271f32894cb9e14f89258920d"}
Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.984351 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mdw2w"]
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.001826 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-tctqn"]
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.009028 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-eedb-account-create-update-wc6wq"]
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.015774 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-pf6dj" podStartSLOduration=3.015754964 podStartE2EDuration="3.015754964s" podCreationTimestamp="2026-03-07 08:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:19.984045808 +0000 UTC m=+1336.893212283" watchObservedRunningTime="2026-03-07 08:11:20.015754964 +0000 UTC m=+1336.924921439"
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.041363 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-736f-account-create-update-jjxjx"]
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.064120 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wq5n6-config-gfqbn"]
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.067941 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wq5n6-config-gfqbn"]
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.080244 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-17dd-account-create-update-fwfjn" podStartSLOduration=3.080220383 podStartE2EDuration="3.080220383s" podCreationTimestamp="2026-03-07 08:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:20.025568793 +0000 UTC m=+1336.934735268" watchObservedRunningTime="2026-03-07 08:11:20.080220383 +0000 UTC m=+1336.989386858"
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.109648 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-wnw7q" podStartSLOduration=3.10962641 podStartE2EDuration="3.10962641s" podCreationTimestamp="2026-03-07 08:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:20.049337158 +0000 UTC m=+1336.958503633" watchObservedRunningTime="2026-03-07 08:11:20.10962641 +0000 UTC m=+1337.018792885"
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.121427 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-3014-account-create-update-gtc26" podStartSLOduration=3.121408399 podStartE2EDuration="3.121408399s" podCreationTimestamp="2026-03-07 08:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:20.080598862 +0000 UTC m=+1336.989765347" watchObservedRunningTime="2026-03-07 08:11:20.121408399 +0000 UTC m=+1337.030574874"
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.173237 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-hbnpl" podStartSLOduration=3.173112314 podStartE2EDuration="3.173112314s" podCreationTimestamp="2026-03-07 08:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:20.117603983 +0000 UTC m=+1337.026770458" watchObservedRunningTime="2026-03-07 08:11:20.173112314 +0000 UTC m=+1337.082278789"
Mar 07 08:11:20 crc kubenswrapper[4761]: W0307 08:11:20.487414 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15e98bf9_0ded_4a61_b436_1f652f69e599.slice/crio-bc9b877b12dd12070e42fbfeb8414f61d3722990621e5ffbd19978c87aba1695 WatchSource:0}: Error finding container bc9b877b12dd12070e42fbfeb8414f61d3722990621e5ffbd19978c87aba1695: Status 404 returned error can't find the container with id bc9b877b12dd12070e42fbfeb8414f61d3722990621e5ffbd19978c87aba1695
Mar 07 08:11:20 crc kubenswrapper[4761]: W0307 08:11:20.489203 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2f2f7f1_78f2_41ef_80a6_efa709f0c281.slice/crio-d95aaec2f1bf03a726bbc6053776ea42c2166e2984c1311a2fdc03872be97f65 WatchSource:0}: Error finding container d95aaec2f1bf03a726bbc6053776ea42c2166e2984c1311a2fdc03872be97f65: Status 404 returned error can't find the container with id d95aaec2f1bf03a726bbc6053776ea42c2166e2984c1311a2fdc03872be97f65
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.981072 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mdw2w" event={"ID":"c2f2f7f1-78f2-41ef-80a6-efa709f0c281","Type":"ContainerStarted","Data":"d95aaec2f1bf03a726bbc6053776ea42c2166e2984c1311a2fdc03872be97f65"}
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.982778 4761 generic.go:334] "Generic (PLEG): container finished" podID="92bbc752-8315-47e4-993a-db9de1da8c87" containerID="0f58c4fafff0cb8ab97e33e3ab3d9ce6836a3e4ac1439c19a573c21811185fee" exitCode=0
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.982830 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pf6dj" event={"ID":"92bbc752-8315-47e4-993a-db9de1da8c87","Type":"ContainerDied","Data":"0f58c4fafff0cb8ab97e33e3ab3d9ce6836a3e4ac1439c19a573c21811185fee"}
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.984144 4761 generic.go:334] "Generic (PLEG): container finished" podID="52ac8e30-44e2-48ba-8272-112bb012a7e2" containerID="7dc0901d8bff55c1c74207d6bd5522c5c55621687f407e19c00a0a08ad96732d" exitCode=0
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.984183 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hbnpl" event={"ID":"52ac8e30-44e2-48ba-8272-112bb012a7e2","Type":"ContainerDied","Data":"7dc0901d8bff55c1c74207d6bd5522c5c55621687f407e19c00a0a08ad96732d"}
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.985158 4761 generic.go:334] "Generic (PLEG): container finished" podID="7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b" containerID="f9efffad10394a551925d203976714f1d199a96ec9a9d78778c7d97eb32fec2c" exitCode=0
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.985195 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-wnw7q" event={"ID":"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b","Type":"ContainerDied","Data":"f9efffad10394a551925d203976714f1d199a96ec9a9d78778c7d97eb32fec2c"}
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.986751 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-eedb-account-create-update-wc6wq" event={"ID":"47e8c767-31e1-4609-8c1f-b62577164637","Type":"ContainerStarted","Data":"cb52f8db9693a97ef7f2eace7b04837ac340ae23c1483c69e0fcb55d84fc48f8"}
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.988517 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-736f-account-create-update-jjxjx" event={"ID":"b4d5d960-90ad-4ca1-a874-6903a4d93d90","Type":"ContainerStarted","Data":"d0ad47d9cbc0cef843d5d13d6855be05d24c5693bb26d7b556278ef4e3658d11"}
Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.990357 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tctqn" event={"ID":"15e98bf9-0ded-4a61-b436-1f652f69e599","Type":"ContainerStarted","Data":"bc9b877b12dd12070e42fbfeb8414f61d3722990621e5ffbd19978c87aba1695"}
Mar 07 08:11:21 crc kubenswrapper[4761]: I0307 08:11:21.179680 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hgl7h"]
Mar 07 08:11:21 crc kubenswrapper[4761]: I0307 08:11:21.188831 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hgl7h"]
Mar 07 08:11:21 crc kubenswrapper[4761]: I0307 08:11:21.720110 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8" path="/var/lib/kubelet/pods/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8/volumes"
Mar 07 08:11:21 crc kubenswrapper[4761]: I0307 08:11:21.721404 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd31dd40-d30a-4680-b1c3-0886cf7678df" path="/var/lib/kubelet/pods/bd31dd40-d30a-4680-b1c3-0886cf7678df/volumes"
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.002822 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-736f-account-create-update-jjxjx" event={"ID":"b4d5d960-90ad-4ca1-a874-6903a4d93d90","Type":"ContainerStarted","Data":"b305d8cec5e50079f6c2ae9f3ecf5ce4a21203d5c8c4e48dd5c5f168bcb4870f"}
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.008928 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"d42a04afb307f3d04f7558d016e8fc17dee6984dfd4b3332944366670509fba3"}
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.008983 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"ec84753a286613acd68913a1676e810500e0cc574a35aeae42c014bd0189821f"}
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.010313 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mdw2w" event={"ID":"c2f2f7f1-78f2-41ef-80a6-efa709f0c281","Type":"ContainerStarted","Data":"9388e27b172f2bb94960bcb3ae75f0505a3ee7ade70af79044c0ce8363c56503"}
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.012290 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-eedb-account-create-update-wc6wq" event={"ID":"47e8c767-31e1-4609-8c1f-b62577164637","Type":"ContainerStarted","Data":"552ada8980f0b2062dc812b73b1d81fa326f40eda6c62f34bd26a1ce3804cc8d"}
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.024400 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-736f-account-create-update-jjxjx" podStartSLOduration=4.024376047 podStartE2EDuration="4.024376047s" podCreationTimestamp="2026-03-07 08:11:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:22.01664707 +0000 UTC m=+1338.925813555" watchObservedRunningTime="2026-03-07 08:11:22.024376047 +0000 UTC m=+1338.933542522"
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.037567 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-mdw2w" podStartSLOduration=5.037546032 podStartE2EDuration="5.037546032s" podCreationTimestamp="2026-03-07 08:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:22.029886347 +0000 UTC m=+1338.939052822" watchObservedRunningTime="2026-03-07 08:11:22.037546032 +0000 UTC m=+1338.946712507"
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.063142 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-eedb-account-create-update-wc6wq" podStartSLOduration=5.062799864 podStartE2EDuration="5.062799864s" podCreationTimestamp="2026-03-07 08:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:22.055233161 +0000 UTC m=+1338.964399626" watchObservedRunningTime="2026-03-07 08:11:22.062799864 +0000 UTC m=+1338.971966339"
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.390427 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pf6dj"
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.577083 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcj6z\" (UniqueName: \"kubernetes.io/projected/92bbc752-8315-47e4-993a-db9de1da8c87-kube-api-access-kcj6z\") pod \"92bbc752-8315-47e4-993a-db9de1da8c87\" (UID: \"92bbc752-8315-47e4-993a-db9de1da8c87\") "
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.577537 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92bbc752-8315-47e4-993a-db9de1da8c87-operator-scripts\") pod \"92bbc752-8315-47e4-993a-db9de1da8c87\" (UID: \"92bbc752-8315-47e4-993a-db9de1da8c87\") "
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.586569 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92bbc752-8315-47e4-993a-db9de1da8c87-kube-api-access-kcj6z" (OuterVolumeSpecName: "kube-api-access-kcj6z") pod "92bbc752-8315-47e4-993a-db9de1da8c87" (UID: "92bbc752-8315-47e4-993a-db9de1da8c87"). InnerVolumeSpecName "kube-api-access-kcj6z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.592238 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92bbc752-8315-47e4-993a-db9de1da8c87-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92bbc752-8315-47e4-993a-db9de1da8c87" (UID: "92bbc752-8315-47e4-993a-db9de1da8c87"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.690869 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92bbc752-8315-47e4-993a-db9de1da8c87-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.690898 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcj6z\" (UniqueName: \"kubernetes.io/projected/92bbc752-8315-47e4-993a-db9de1da8c87-kube-api-access-kcj6z\") on node \"crc\" DevicePath \"\""
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.761264 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hbnpl"
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.769533 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-wnw7q"
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.896984 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knkv2\" (UniqueName: \"kubernetes.io/projected/52ac8e30-44e2-48ba-8272-112bb012a7e2-kube-api-access-knkv2\") pod \"52ac8e30-44e2-48ba-8272-112bb012a7e2\" (UID: \"52ac8e30-44e2-48ba-8272-112bb012a7e2\") "
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.897262 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52ac8e30-44e2-48ba-8272-112bb012a7e2-operator-scripts\") pod \"52ac8e30-44e2-48ba-8272-112bb012a7e2\" (UID: \"52ac8e30-44e2-48ba-8272-112bb012a7e2\") "
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.897314 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-operator-scripts\") pod \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\" (UID: \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\") "
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.897343 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqd8c\" (UniqueName: \"kubernetes.io/projected/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-kube-api-access-jqd8c\") pod \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\" (UID: \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\") "
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.897795 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52ac8e30-44e2-48ba-8272-112bb012a7e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52ac8e30-44e2-48ba-8272-112bb012a7e2" (UID: "52ac8e30-44e2-48ba-8272-112bb012a7e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.897901 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52ac8e30-44e2-48ba-8272-112bb012a7e2-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.899125 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b" (UID: "7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.906103 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52ac8e30-44e2-48ba-8272-112bb012a7e2-kube-api-access-knkv2" (OuterVolumeSpecName: "kube-api-access-knkv2") pod "52ac8e30-44e2-48ba-8272-112bb012a7e2" (UID: "52ac8e30-44e2-48ba-8272-112bb012a7e2"). InnerVolumeSpecName "kube-api-access-knkv2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.907135 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-kube-api-access-jqd8c" (OuterVolumeSpecName: "kube-api-access-jqd8c") pod "7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b" (UID: "7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b"). InnerVolumeSpecName "kube-api-access-jqd8c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.999249 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.999285 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqd8c\" (UniqueName: \"kubernetes.io/projected/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-kube-api-access-jqd8c\") on node \"crc\" DevicePath \"\""
Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.999297 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knkv2\" (UniqueName: \"kubernetes.io/projected/52ac8e30-44e2-48ba-8272-112bb012a7e2-kube-api-access-knkv2\") on node \"crc\" DevicePath \"\""
Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.023887 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pf6dj" event={"ID":"92bbc752-8315-47e4-993a-db9de1da8c87","Type":"ContainerDied","Data":"5c0f05af8eabe2860c777a18ea45cbfb508ab0a805a002599decb6f92f94656d"}
Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.023927 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c0f05af8eabe2860c777a18ea45cbfb508ab0a805a002599decb6f92f94656d"
Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.024991 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pf6dj"
Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.025793 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hbnpl" event={"ID":"52ac8e30-44e2-48ba-8272-112bb012a7e2","Type":"ContainerDied","Data":"5b71d89b5a89190e6a686335d1ca623a12f09bd317c93d2865d90e29e29e37f5"}
Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.025813 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hbnpl"
Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.025824 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b71d89b5a89190e6a686335d1ca623a12f09bd317c93d2865d90e29e29e37f5"
Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.027180 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-wnw7q" event={"ID":"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b","Type":"ContainerDied","Data":"149c23796c936aa420fce8749e5ba3d7121c8ecb122efa24ce630498cef827a2"}
Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.027224 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="149c23796c936aa420fce8749e5ba3d7121c8ecb122efa24ce630498cef827a2"
Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.027241 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-wnw7q"
Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.032552 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"1107c3172a290bc401db762bd99cc679688e5d62345277e673fb06c908b58f1a"}
Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.041079 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"526b9328-0f86-4c3d-9a27-116742cee11a","Type":"ContainerStarted","Data":"3325039d2caf09b80948b1ba9679f735f2f9a2591ad450dfec9bcc1a57c99c15"}
Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.041204 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"526b9328-0f86-4c3d-9a27-116742cee11a","Type":"ContainerStarted","Data":"c64d995b6e878fcf1229303f0d4b8c12776e9e0292f88ab466a602fb75396478"}
Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.043850 4761 generic.go:334] "Generic (PLEG): container finished" podID="c2f2f7f1-78f2-41ef-80a6-efa709f0c281" containerID="9388e27b172f2bb94960bcb3ae75f0505a3ee7ade70af79044c0ce8363c56503" exitCode=0
Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.044855 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mdw2w" event={"ID":"c2f2f7f1-78f2-41ef-80a6-efa709f0c281","Type":"ContainerDied","Data":"9388e27b172f2bb94960bcb3ae75f0505a3ee7ade70af79044c0ce8363c56503"}
Mar 07 08:11:24 crc kubenswrapper[4761]: I0307 08:11:24.060120 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"b0a93269f08352666c9c91b9e554a56a00107cdc66214fe7636965cc504800ff"}
Mar 07 08:11:24 crc kubenswrapper[4761]: I0307 08:11:24.060522 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"c59dda5fbe1de25dcc41b6815bfee3f80c301cba8fca36f936cb6ac1d5959a75"}
Mar 07 08:11:24 crc kubenswrapper[4761]: I0307 08:11:24.062293 4761 generic.go:334] "Generic (PLEG): container finished" podID="6b359be0-899b-479e-ac6c-1ed4422b7da8" containerID="213af97bbe0e3ae38c1d1515fc22f6b13311545e5a40f677bbee0870e83ed3ae" exitCode=0
Mar 07 08:11:24 crc kubenswrapper[4761]: I0307 08:11:24.062377 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-17dd-account-create-update-fwfjn" event={"ID":"6b359be0-899b-479e-ac6c-1ed4422b7da8","Type":"ContainerDied","Data":"213af97bbe0e3ae38c1d1515fc22f6b13311545e5a40f677bbee0870e83ed3ae"}
Mar 07 08:11:24 crc kubenswrapper[4761]: I0307 08:11:24.109742 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.109705728 podStartE2EDuration="18.109705728s" podCreationTimestamp="2026-03-07 08:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:24.097360585 +0000 UTC m=+1341.006527060" watchObservedRunningTime="2026-03-07 08:11:24.109705728 +0000 UTC m=+1341.018872213"
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.191892 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xhpdg"]
Mar 07 08:11:26 crc kubenswrapper[4761]: E0307 08:11:26.192849 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92bbc752-8315-47e4-993a-db9de1da8c87" containerName="mariadb-database-create"
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.192867 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="92bbc752-8315-47e4-993a-db9de1da8c87" containerName="mariadb-database-create"
Mar 07 08:11:26 crc kubenswrapper[4761]: E0307 08:11:26.192888 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b" containerName="mariadb-database-create"
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.192894 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b" containerName="mariadb-database-create"
Mar 07 08:11:26 crc kubenswrapper[4761]: E0307 08:11:26.192914 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ac8e30-44e2-48ba-8272-112bb012a7e2" containerName="mariadb-database-create"
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.192921 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ac8e30-44e2-48ba-8272-112bb012a7e2" containerName="mariadb-database-create"
Mar 07 08:11:26 crc kubenswrapper[4761]: E0307 08:11:26.192935 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd31dd40-d30a-4680-b1c3-0886cf7678df" containerName="ovn-config"
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.192940 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd31dd40-d30a-4680-b1c3-0886cf7678df" containerName="ovn-config"
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.193125 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b" containerName="mariadb-database-create"
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.193139 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd31dd40-d30a-4680-b1c3-0886cf7678df" containerName="ovn-config"
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.193154 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ac8e30-44e2-48ba-8272-112bb012a7e2" containerName="mariadb-database-create"
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.193174 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="92bbc752-8315-47e4-993a-db9de1da8c87" containerName="mariadb-database-create"
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.193939 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xhpdg"
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.196036 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.207587 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xhpdg"]
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.279471 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ntpc\" (UniqueName: \"kubernetes.io/projected/573aa590-eee5-4f25-80ba-8bcf0a712d6f-kube-api-access-8ntpc\") pod \"root-account-create-update-xhpdg\" (UID: \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\") " pod="openstack/root-account-create-update-xhpdg"
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.279914 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/573aa590-eee5-4f25-80ba-8bcf0a712d6f-operator-scripts\") pod \"root-account-create-update-xhpdg\" (UID: \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\") " pod="openstack/root-account-create-update-xhpdg"
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.382037 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ntpc\" (UniqueName: \"kubernetes.io/projected/573aa590-eee5-4f25-80ba-8bcf0a712d6f-kube-api-access-8ntpc\") pod \"root-account-create-update-xhpdg\" (UID: \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\") " pod="openstack/root-account-create-update-xhpdg"
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.382098 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/573aa590-eee5-4f25-80ba-8bcf0a712d6f-operator-scripts\") pod \"root-account-create-update-xhpdg\" (UID: \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\") " pod="openstack/root-account-create-update-xhpdg"
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.383187 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/573aa590-eee5-4f25-80ba-8bcf0a712d6f-operator-scripts\") pod \"root-account-create-update-xhpdg\" (UID: \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\") " pod="openstack/root-account-create-update-xhpdg"
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.401446 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ntpc\" (UniqueName: \"kubernetes.io/projected/573aa590-eee5-4f25-80ba-8bcf0a712d6f-kube-api-access-8ntpc\") pod \"root-account-create-update-xhpdg\" (UID: \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\") " pod="openstack/root-account-create-update-xhpdg"
Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.559646 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xhpdg"
Mar 07 08:11:27 crc kubenswrapper[4761]: I0307 08:11:27.398772 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.053363 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mdw2w"
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.058338 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-17dd-account-create-update-fwfjn"
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.125493 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b359be0-899b-479e-ac6c-1ed4422b7da8-operator-scripts\") pod \"6b359be0-899b-479e-ac6c-1ed4422b7da8\" (UID: \"6b359be0-899b-479e-ac6c-1ed4422b7da8\") "
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.125614 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-operator-scripts\") pod \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\" (UID: \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\") "
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.125788 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tr99\" (UniqueName: \"kubernetes.io/projected/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-kube-api-access-4tr99\") pod \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\" (UID: \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\") "
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.125820 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz8zl\" (UniqueName: \"kubernetes.io/projected/6b359be0-899b-479e-ac6c-1ed4422b7da8-kube-api-access-pz8zl\") pod \"6b359be0-899b-479e-ac6c-1ed4422b7da8\" (UID: \"6b359be0-899b-479e-ac6c-1ed4422b7da8\") "
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.128845 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b359be0-899b-479e-ac6c-1ed4422b7da8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b359be0-899b-479e-ac6c-1ed4422b7da8" (UID: "6b359be0-899b-479e-ac6c-1ed4422b7da8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.129105 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c2f2f7f1-78f2-41ef-80a6-efa709f0c281" (UID: "c2f2f7f1-78f2-41ef-80a6-efa709f0c281"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.147437 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-kube-api-access-4tr99" (OuterVolumeSpecName: "kube-api-access-4tr99") pod "c2f2f7f1-78f2-41ef-80a6-efa709f0c281" (UID: "c2f2f7f1-78f2-41ef-80a6-efa709f0c281"). InnerVolumeSpecName "kube-api-access-4tr99". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.148299 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b359be0-899b-479e-ac6c-1ed4422b7da8-kube-api-access-pz8zl" (OuterVolumeSpecName: "kube-api-access-pz8zl") pod "6b359be0-899b-479e-ac6c-1ed4422b7da8" (UID: "6b359be0-899b-479e-ac6c-1ed4422b7da8"). InnerVolumeSpecName "kube-api-access-pz8zl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.154352 4761 generic.go:334] "Generic (PLEG): container finished" podID="c9894a0c-ae83-4f9b-96c5-4bac5772ad56" containerID="542f79b9da20217da4609522244e7105c548cdfef4734a40d1dafb1bb2fb8f49" exitCode=0
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.154402 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3014-account-create-update-gtc26" event={"ID":"c9894a0c-ae83-4f9b-96c5-4bac5772ad56","Type":"ContainerDied","Data":"542f79b9da20217da4609522244e7105c548cdfef4734a40d1dafb1bb2fb8f49"}
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.156070 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mdw2w" event={"ID":"c2f2f7f1-78f2-41ef-80a6-efa709f0c281","Type":"ContainerDied","Data":"d95aaec2f1bf03a726bbc6053776ea42c2166e2984c1311a2fdc03872be97f65"}
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.156091 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d95aaec2f1bf03a726bbc6053776ea42c2166e2984c1311a2fdc03872be97f65"
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.156166 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mdw2w"
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.164884 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-17dd-account-create-update-fwfjn" event={"ID":"6b359be0-899b-479e-ac6c-1ed4422b7da8","Type":"ContainerDied","Data":"9801cbe1cc4bb1b0ec16f5aa1a7f90ea4feb5004c2d6016810b5cbb31c039dc4"}
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.164912 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9801cbe1cc4bb1b0ec16f5aa1a7f90ea4feb5004c2d6016810b5cbb31c039dc4"
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.164955 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-17dd-account-create-update-fwfjn"
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.228008 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tr99\" (UniqueName: \"kubernetes.io/projected/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-kube-api-access-4tr99\") on node \"crc\" DevicePath \"\""
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.228041 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz8zl\" (UniqueName: \"kubernetes.io/projected/6b359be0-899b-479e-ac6c-1ed4422b7da8-kube-api-access-pz8zl\") on node \"crc\" DevicePath \"\""
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.228052 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b359be0-899b-479e-ac6c-1ed4422b7da8-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.228060 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.421744 4761 kubelet.go:2428]
"SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xhpdg"] Mar 07 08:11:28 crc kubenswrapper[4761]: W0307 08:11:28.428136 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod573aa590_eee5_4f25_80ba_8bcf0a712d6f.slice/crio-721c79a12779385191043aa361499e60f93fda39c160e8a955777f11c81ed849 WatchSource:0}: Error finding container 721c79a12779385191043aa361499e60f93fda39c160e8a955777f11c81ed849: Status 404 returned error can't find the container with id 721c79a12779385191043aa361499e60f93fda39c160e8a955777f11c81ed849 Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.179967 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"73234239ef64cd3a1d92f8eefa8dfa12a7c11750670a840c2d92a239876a2200"} Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.181841 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xhpdg" event={"ID":"573aa590-eee5-4f25-80ba-8bcf0a712d6f","Type":"ContainerStarted","Data":"f625278ad061e03435fe6dd38c6b918071ccbe277752ebf56038dc3f252be709"} Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.181940 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xhpdg" event={"ID":"573aa590-eee5-4f25-80ba-8bcf0a712d6f","Type":"ContainerStarted","Data":"721c79a12779385191043aa361499e60f93fda39c160e8a955777f11c81ed849"} Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.203704 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-xhpdg" podStartSLOduration=3.20367708 podStartE2EDuration="3.20367708s" podCreationTimestamp="2026-03-07 08:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-07 08:11:29.202134461 +0000 UTC m=+1346.111300936" watchObservedRunningTime="2026-03-07 08:11:29.20367708 +0000 UTC m=+1346.112843585" Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.796793 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.863008 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcw9w\" (UniqueName: \"kubernetes.io/projected/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-kube-api-access-tcw9w\") pod \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\" (UID: \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\") " Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.863308 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-operator-scripts\") pod \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\" (UID: \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\") " Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.866006 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9894a0c-ae83-4f9b-96c5-4bac5772ad56" (UID: "c9894a0c-ae83-4f9b-96c5-4bac5772ad56"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.875954 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-kube-api-access-tcw9w" (OuterVolumeSpecName: "kube-api-access-tcw9w") pod "c9894a0c-ae83-4f9b-96c5-4bac5772ad56" (UID: "c9894a0c-ae83-4f9b-96c5-4bac5772ad56"). InnerVolumeSpecName "kube-api-access-tcw9w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.966331 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.966941 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcw9w\" (UniqueName: \"kubernetes.io/projected/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-kube-api-access-tcw9w\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.197082 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.197521 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3014-account-create-update-gtc26" event={"ID":"c9894a0c-ae83-4f9b-96c5-4bac5772ad56","Type":"ContainerDied","Data":"34f277fe3d4f13198ef7337e6036062b5c98a21271f32894cb9e14f89258920d"} Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.197558 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34f277fe3d4f13198ef7337e6036062b5c98a21271f32894cb9e14f89258920d" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.204386 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"c142c53e890d72d6b360db9e262996b7735b840f7f47b8d7abba8232433657de"} Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.213664 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tctqn" event={"ID":"15e98bf9-0ded-4a61-b436-1f652f69e599","Type":"ContainerStarted","Data":"c03ac32aaa97dba1c311494ead8833dd468ddd521d71d0daa9a777f906ff04e3"} Mar 07 08:11:30 crc kubenswrapper[4761]: 
I0307 08:11:30.215887 4761 generic.go:334] "Generic (PLEG): container finished" podID="47e8c767-31e1-4609-8c1f-b62577164637" containerID="552ada8980f0b2062dc812b73b1d81fa326f40eda6c62f34bd26a1ce3804cc8d" exitCode=0 Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.215970 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-eedb-account-create-update-wc6wq" event={"ID":"47e8c767-31e1-4609-8c1f-b62577164637","Type":"ContainerDied","Data":"552ada8980f0b2062dc812b73b1d81fa326f40eda6c62f34bd26a1ce3804cc8d"} Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.217983 4761 generic.go:334] "Generic (PLEG): container finished" podID="573aa590-eee5-4f25-80ba-8bcf0a712d6f" containerID="f625278ad061e03435fe6dd38c6b918071ccbe277752ebf56038dc3f252be709" exitCode=0 Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.218056 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xhpdg" event={"ID":"573aa590-eee5-4f25-80ba-8bcf0a712d6f","Type":"ContainerDied","Data":"f625278ad061e03435fe6dd38c6b918071ccbe277752ebf56038dc3f252be709"} Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.223460 4761 generic.go:334] "Generic (PLEG): container finished" podID="b4d5d960-90ad-4ca1-a874-6903a4d93d90" containerID="b305d8cec5e50079f6c2ae9f3ecf5ce4a21203d5c8c4e48dd5c5f168bcb4870f" exitCode=0 Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.223529 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-736f-account-create-update-jjxjx" event={"ID":"b4d5d960-90ad-4ca1-a874-6903a4d93d90","Type":"ContainerDied","Data":"b305d8cec5e50079f6c2ae9f3ecf5ce4a21203d5c8c4e48dd5c5f168bcb4870f"} Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.275363 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=44.021763179 podStartE2EDuration="1m0.275343518s" podCreationTimestamp="2026-03-07 08:10:30 +0000 UTC" 
firstStartedPulling="2026-03-07 08:11:04.331359732 +0000 UTC m=+1321.240526207" lastFinishedPulling="2026-03-07 08:11:20.584940071 +0000 UTC m=+1337.494106546" observedRunningTime="2026-03-07 08:11:30.274813255 +0000 UTC m=+1347.183979720" watchObservedRunningTime="2026-03-07 08:11:30.275343518 +0000 UTC m=+1347.184510003" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.388690 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-tctqn" podStartSLOduration=4.40164324 podStartE2EDuration="13.388671609s" podCreationTimestamp="2026-03-07 08:11:17 +0000 UTC" firstStartedPulling="2026-03-07 08:11:20.489248179 +0000 UTC m=+1337.398414654" lastFinishedPulling="2026-03-07 08:11:29.476276548 +0000 UTC m=+1346.385443023" observedRunningTime="2026-03-07 08:11:30.379836434 +0000 UTC m=+1347.289002909" watchObservedRunningTime="2026-03-07 08:11:30.388671609 +0000 UTC m=+1347.297838084" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.731421 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cbkln"] Mar 07 08:11:30 crc kubenswrapper[4761]: E0307 08:11:30.732418 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2f2f7f1-78f2-41ef-80a6-efa709f0c281" containerName="mariadb-database-create" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.732438 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2f2f7f1-78f2-41ef-80a6-efa709f0c281" containerName="mariadb-database-create" Mar 07 08:11:30 crc kubenswrapper[4761]: E0307 08:11:30.732460 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b359be0-899b-479e-ac6c-1ed4422b7da8" containerName="mariadb-account-create-update" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.732467 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b359be0-899b-479e-ac6c-1ed4422b7da8" containerName="mariadb-account-create-update" Mar 07 08:11:30 crc kubenswrapper[4761]: E0307 
08:11:30.732475 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9894a0c-ae83-4f9b-96c5-4bac5772ad56" containerName="mariadb-account-create-update" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.732481 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9894a0c-ae83-4f9b-96c5-4bac5772ad56" containerName="mariadb-account-create-update" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.732780 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9894a0c-ae83-4f9b-96c5-4bac5772ad56" containerName="mariadb-account-create-update" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.732800 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2f2f7f1-78f2-41ef-80a6-efa709f0c281" containerName="mariadb-database-create" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.732817 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b359be0-899b-479e-ac6c-1ed4422b7da8" containerName="mariadb-account-create-update" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.733793 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.735784 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.760094 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cbkln"] Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.787858 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.788102 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-svc\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.788214 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.788332 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " 
pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.789268 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-config\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.789759 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmwlb\" (UniqueName: \"kubernetes.io/projected/da5c6e23-f4e6-4c38-8801-89453ef0b91a-kube-api-access-wmwlb\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.891883 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-config\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.891975 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmwlb\" (UniqueName: \"kubernetes.io/projected/da5c6e23-f4e6-4c38-8801-89453ef0b91a-kube-api-access-wmwlb\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.892055 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " 
pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.892093 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-svc\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.892121 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.892181 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.893076 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-config\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.893092 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: 
I0307 08:11:30.894163 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.894591 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-svc\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.895130 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.921034 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmwlb\" (UniqueName: \"kubernetes.io/projected/da5c6e23-f4e6-4c38-8801-89453ef0b91a-kube-api-access-wmwlb\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.062607 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.552741 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cbkln"] Mar 07 08:11:31 crc kubenswrapper[4761]: W0307 08:11:31.581516 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda5c6e23_f4e6_4c38_8801_89453ef0b91a.slice/crio-3cbd8d27644c5a92e8a51f2f09491a39423c7c677204bd57749d30ef21c68d85 WatchSource:0}: Error finding container 3cbd8d27644c5a92e8a51f2f09491a39423c7c677204bd57749d30ef21c68d85: Status 404 returned error can't find the container with id 3cbd8d27644c5a92e8a51f2f09491a39423c7c677204bd57749d30ef21c68d85 Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.730268 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xhpdg" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.803863 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.822874 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.919702 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fdkh\" (UniqueName: \"kubernetes.io/projected/47e8c767-31e1-4609-8c1f-b62577164637-kube-api-access-8fdkh\") pod \"47e8c767-31e1-4609-8c1f-b62577164637\" (UID: \"47e8c767-31e1-4609-8c1f-b62577164637\") " Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.919893 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhbj8\" (UniqueName: \"kubernetes.io/projected/b4d5d960-90ad-4ca1-a874-6903a4d93d90-kube-api-access-dhbj8\") pod \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\" (UID: \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\") " Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.919959 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47e8c767-31e1-4609-8c1f-b62577164637-operator-scripts\") pod \"47e8c767-31e1-4609-8c1f-b62577164637\" (UID: \"47e8c767-31e1-4609-8c1f-b62577164637\") " Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.920055 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/573aa590-eee5-4f25-80ba-8bcf0a712d6f-operator-scripts\") pod \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\" (UID: \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\") " Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.920086 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4d5d960-90ad-4ca1-a874-6903a4d93d90-operator-scripts\") pod \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\" (UID: \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\") " Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.920162 4761 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-8ntpc\" (UniqueName: \"kubernetes.io/projected/573aa590-eee5-4f25-80ba-8bcf0a712d6f-kube-api-access-8ntpc\") pod \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\" (UID: \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\") " Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.923612 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/573aa590-eee5-4f25-80ba-8bcf0a712d6f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "573aa590-eee5-4f25-80ba-8bcf0a712d6f" (UID: "573aa590-eee5-4f25-80ba-8bcf0a712d6f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.923672 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4d5d960-90ad-4ca1-a874-6903a4d93d90-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4d5d960-90ad-4ca1-a874-6903a4d93d90" (UID: "b4d5d960-90ad-4ca1-a874-6903a4d93d90"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.923644 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47e8c767-31e1-4609-8c1f-b62577164637-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47e8c767-31e1-4609-8c1f-b62577164637" (UID: "47e8c767-31e1-4609-8c1f-b62577164637"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.925334 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47e8c767-31e1-4609-8c1f-b62577164637-kube-api-access-8fdkh" (OuterVolumeSpecName: "kube-api-access-8fdkh") pod "47e8c767-31e1-4609-8c1f-b62577164637" (UID: "47e8c767-31e1-4609-8c1f-b62577164637"). 
InnerVolumeSpecName "kube-api-access-8fdkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.925552 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/573aa590-eee5-4f25-80ba-8bcf0a712d6f-kube-api-access-8ntpc" (OuterVolumeSpecName: "kube-api-access-8ntpc") pod "573aa590-eee5-4f25-80ba-8bcf0a712d6f" (UID: "573aa590-eee5-4f25-80ba-8bcf0a712d6f"). InnerVolumeSpecName "kube-api-access-8ntpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.926798 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d5d960-90ad-4ca1-a874-6903a4d93d90-kube-api-access-dhbj8" (OuterVolumeSpecName: "kube-api-access-dhbj8") pod "b4d5d960-90ad-4ca1-a874-6903a4d93d90" (UID: "b4d5d960-90ad-4ca1-a874-6903a4d93d90"). InnerVolumeSpecName "kube-api-access-dhbj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.022413 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/573aa590-eee5-4f25-80ba-8bcf0a712d6f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.022446 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4d5d960-90ad-4ca1-a874-6903a4d93d90-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.022456 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ntpc\" (UniqueName: \"kubernetes.io/projected/573aa590-eee5-4f25-80ba-8bcf0a712d6f-kube-api-access-8ntpc\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.022466 4761 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-8fdkh\" (UniqueName: \"kubernetes.io/projected/47e8c767-31e1-4609-8c1f-b62577164637-kube-api-access-8fdkh\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.022475 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhbj8\" (UniqueName: \"kubernetes.io/projected/b4d5d960-90ad-4ca1-a874-6903a4d93d90-kube-api-access-dhbj8\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.022484 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47e8c767-31e1-4609-8c1f-b62577164637-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.262471 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.262465 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-736f-account-create-update-jjxjx" event={"ID":"b4d5d960-90ad-4ca1-a874-6903a4d93d90","Type":"ContainerDied","Data":"d0ad47d9cbc0cef843d5d13d6855be05d24c5693bb26d7b556278ef4e3658d11"} Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.263885 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0ad47d9cbc0cef843d5d13d6855be05d24c5693bb26d7b556278ef4e3658d11" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.269161 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-eedb-account-create-update-wc6wq" event={"ID":"47e8c767-31e1-4609-8c1f-b62577164637","Type":"ContainerDied","Data":"cb52f8db9693a97ef7f2eace7b04837ac340ae23c1483c69e0fcb55d84fc48f8"} Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.269224 4761 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="cb52f8db9693a97ef7f2eace7b04837ac340ae23c1483c69e0fcb55d84fc48f8" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.269169 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.272522 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xhpdg" event={"ID":"573aa590-eee5-4f25-80ba-8bcf0a712d6f","Type":"ContainerDied","Data":"721c79a12779385191043aa361499e60f93fda39c160e8a955777f11c81ed849"} Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.272571 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="721c79a12779385191043aa361499e60f93fda39c160e8a955777f11c81ed849" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.272659 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xhpdg" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.281392 4761 generic.go:334] "Generic (PLEG): container finished" podID="da5c6e23-f4e6-4c38-8801-89453ef0b91a" containerID="14da72442b10d10498c7e858f38e4f1fb7e091af17ae06c7023832789db215bc" exitCode=0 Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.281468 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" event={"ID":"da5c6e23-f4e6-4c38-8801-89453ef0b91a","Type":"ContainerDied","Data":"14da72442b10d10498c7e858f38e4f1fb7e091af17ae06c7023832789db215bc"} Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.281563 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" event={"ID":"da5c6e23-f4e6-4c38-8801-89453ef0b91a","Type":"ContainerStarted","Data":"3cbd8d27644c5a92e8a51f2f09491a39423c7c677204bd57749d30ef21c68d85"} Mar 07 08:11:33 crc kubenswrapper[4761]: I0307 08:11:33.291678 4761 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" event={"ID":"da5c6e23-f4e6-4c38-8801-89453ef0b91a","Type":"ContainerStarted","Data":"8da4058576fe488581a9362856eb7b6903ab379a61631bef26d1ff5e20139718"} Mar 07 08:11:33 crc kubenswrapper[4761]: I0307 08:11:33.293259 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:33 crc kubenswrapper[4761]: I0307 08:11:33.294134 4761 generic.go:334] "Generic (PLEG): container finished" podID="15e98bf9-0ded-4a61-b436-1f652f69e599" containerID="c03ac32aaa97dba1c311494ead8833dd468ddd521d71d0daa9a777f906ff04e3" exitCode=0 Mar 07 08:11:33 crc kubenswrapper[4761]: I0307 08:11:33.294180 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tctqn" event={"ID":"15e98bf9-0ded-4a61-b436-1f652f69e599","Type":"ContainerDied","Data":"c03ac32aaa97dba1c311494ead8833dd468ddd521d71d0daa9a777f906ff04e3"} Mar 07 08:11:33 crc kubenswrapper[4761]: I0307 08:11:33.314772 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" podStartSLOduration=3.314741349 podStartE2EDuration="3.314741349s" podCreationTimestamp="2026-03-07 08:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:33.308637074 +0000 UTC m=+1350.217803559" watchObservedRunningTime="2026-03-07 08:11:33.314741349 +0000 UTC m=+1350.223907824" Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.730094 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.880642 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-combined-ca-bundle\") pod \"15e98bf9-0ded-4a61-b436-1f652f69e599\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.882405 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-config-data\") pod \"15e98bf9-0ded-4a61-b436-1f652f69e599\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.882696 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p676r\" (UniqueName: \"kubernetes.io/projected/15e98bf9-0ded-4a61-b436-1f652f69e599-kube-api-access-p676r\") pod \"15e98bf9-0ded-4a61-b436-1f652f69e599\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.888071 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e98bf9-0ded-4a61-b436-1f652f69e599-kube-api-access-p676r" (OuterVolumeSpecName: "kube-api-access-p676r") pod "15e98bf9-0ded-4a61-b436-1f652f69e599" (UID: "15e98bf9-0ded-4a61-b436-1f652f69e599"). InnerVolumeSpecName "kube-api-access-p676r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.922827 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15e98bf9-0ded-4a61-b436-1f652f69e599" (UID: "15e98bf9-0ded-4a61-b436-1f652f69e599"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.955795 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-config-data" (OuterVolumeSpecName: "config-data") pod "15e98bf9-0ded-4a61-b436-1f652f69e599" (UID: "15e98bf9-0ded-4a61-b436-1f652f69e599"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.986487 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p676r\" (UniqueName: \"kubernetes.io/projected/15e98bf9-0ded-4a61-b436-1f652f69e599-kube-api-access-p676r\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.986550 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.986569 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.322667 4761 generic.go:334] "Generic (PLEG): container finished" podID="a990e713-634f-47c4-acbe-980ed66d30fe" containerID="fe7c46f93fcb404a48fdfddcf53140cbe34999481e23b77955840ad956bcf535" exitCode=0 Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.322765 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-g9w2m" event={"ID":"a990e713-634f-47c4-acbe-980ed66d30fe","Type":"ContainerDied","Data":"fe7c46f93fcb404a48fdfddcf53140cbe34999481e23b77955840ad956bcf535"} Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.324608 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.324616 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tctqn" event={"ID":"15e98bf9-0ded-4a61-b436-1f652f69e599","Type":"ContainerDied","Data":"bc9b877b12dd12070e42fbfeb8414f61d3722990621e5ffbd19978c87aba1695"} Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.324680 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc9b877b12dd12070e42fbfeb8414f61d3722990621e5ffbd19978c87aba1695" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.628913 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cbkln"] Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.677320 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vntzs"] Mar 07 08:11:35 crc kubenswrapper[4761]: E0307 08:11:35.689575 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573aa590-eee5-4f25-80ba-8bcf0a712d6f" containerName="mariadb-account-create-update" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.689836 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="573aa590-eee5-4f25-80ba-8bcf0a712d6f" containerName="mariadb-account-create-update" Mar 07 08:11:35 crc kubenswrapper[4761]: E0307 08:11:35.707551 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e98bf9-0ded-4a61-b436-1f652f69e599" containerName="keystone-db-sync" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.707599 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e98bf9-0ded-4a61-b436-1f652f69e599" containerName="keystone-db-sync" Mar 07 08:11:35 crc kubenswrapper[4761]: E0307 08:11:35.707630 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d5d960-90ad-4ca1-a874-6903a4d93d90" containerName="mariadb-account-create-update" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 
08:11:35.707637 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d5d960-90ad-4ca1-a874-6903a4d93d90" containerName="mariadb-account-create-update" Mar 07 08:11:35 crc kubenswrapper[4761]: E0307 08:11:35.707676 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e8c767-31e1-4609-8c1f-b62577164637" containerName="mariadb-account-create-update" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.707683 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e8c767-31e1-4609-8c1f-b62577164637" containerName="mariadb-account-create-update" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.708557 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e98bf9-0ded-4a61-b436-1f652f69e599" containerName="keystone-db-sync" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.708584 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e8c767-31e1-4609-8c1f-b62577164637" containerName="mariadb-account-create-update" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.708601 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="573aa590-eee5-4f25-80ba-8bcf0a712d6f" containerName="mariadb-account-create-update" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.708624 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d5d960-90ad-4ca1-a874-6903a4d93d90" containerName="mariadb-account-create-update" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.716886 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.742073 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.744684 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.745606 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.746064 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.760171 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pgh8w" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.814484 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vntzs"] Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.829913 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-h5khg"] Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.839411 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.845827 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-h5khg"] Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.857124 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-92qzx"] Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.858560 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.861481 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-8k8rs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.864753 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.869610 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-92qzx"] Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885696 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-combined-ca-bundle\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885749 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq47r\" (UniqueName: \"kubernetes.io/projected/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-kube-api-access-pq47r\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885769 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-credential-keys\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885792 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-config-data\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885815 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-svc\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885832 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llpdc\" (UniqueName: \"kubernetes.io/projected/6af89ced-7c28-41a8-9446-c90f8951bd84-kube-api-access-llpdc\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885865 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66pdh\" (UniqueName: \"kubernetes.io/projected/dce2c706-6c24-4be8-b347-90448de8aaf9-kube-api-access-66pdh\") pod \"heat-db-sync-92qzx\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885885 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-scripts\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885912 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885940 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885980 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885995 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-fernet-keys\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.886022 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-combined-ca-bundle\") pod \"heat-db-sync-92qzx\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.886040 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-config\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.886101 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-config-data\") pod \"heat-db-sync-92qzx\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.935145 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-vthx6"] Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.937883 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vthx6" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.942103 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-42xrl" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.942663 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.943005 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.949324 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vthx6"] Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.977929 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-d9psc"] Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.987946 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.989869 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-config-data\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.989991 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-svc\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.990104 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llpdc\" (UniqueName: \"kubernetes.io/projected/6af89ced-7c28-41a8-9446-c90f8951bd84-kube-api-access-llpdc\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.990211 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66pdh\" (UniqueName: \"kubernetes.io/projected/dce2c706-6c24-4be8-b347-90448de8aaf9-kube-api-access-66pdh\") pod \"heat-db-sync-92qzx\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.990299 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-scripts\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 
08:11:35.990378 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.990471 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.990569 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.990642 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-fernet-keys\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.990805 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-combined-ca-bundle\") pod \"heat-db-sync-92qzx\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.990925 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-config\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.991081 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-config-data\") pod \"heat-db-sync-92qzx\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.991167 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdhth\" (UniqueName: \"kubernetes.io/projected/0aa749a9-f668-4927-8a9a-28df83640ac4-kube-api-access-xdhth\") pod \"neutron-db-sync-vthx6\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " pod="openstack/neutron-db-sync-vthx6" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.991243 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-config\") pod \"neutron-db-sync-vthx6\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " pod="openstack/neutron-db-sync-vthx6" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.991362 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-combined-ca-bundle\") pod \"neutron-db-sync-vthx6\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " pod="openstack/neutron-db-sync-vthx6" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.991470 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-combined-ca-bundle\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.991568 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq47r\" (UniqueName: \"kubernetes.io/projected/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-kube-api-access-pq47r\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.991684 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-credential-keys\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.000155 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-svc\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.000808 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.001881 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-nb\") 
pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.002493 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-config\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.004941 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.013000 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-credential-keys\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.016031 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-config-data\") pod \"heat-db-sync-92qzx\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.016344 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-scripts\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:36 crc 
kubenswrapper[4761]: I0307 08:11:36.023704 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pnxzw"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.023994 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.024113 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.029388 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-fernet-keys\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.030339 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-config-data\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.031316 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-combined-ca-bundle\") pod \"heat-db-sync-92qzx\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " pod="openstack/heat-db-sync-92qzx"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.035336 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-combined-ca-bundle\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.079816 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66pdh\" (UniqueName: \"kubernetes.io/projected/dce2c706-6c24-4be8-b347-90448de8aaf9-kube-api-access-66pdh\") pod \"heat-db-sync-92qzx\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " pod="openstack/heat-db-sync-92qzx"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.080330 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq47r\" (UniqueName: \"kubernetes.io/projected/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-kube-api-access-pq47r\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.095383 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-combined-ca-bundle\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.095621 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/782631b9-e01d-424c-af31-3471bfdf1587-etc-machine-id\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.095724 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-scripts\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.095740 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-db-sync-config-data\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.095759 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-config-data\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.095780 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxs4p\" (UniqueName: \"kubernetes.io/projected/782631b9-e01d-424c-af31-3471bfdf1587-kube-api-access-hxs4p\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.095800 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdhth\" (UniqueName: \"kubernetes.io/projected/0aa749a9-f668-4927-8a9a-28df83640ac4-kube-api-access-xdhth\") pod \"neutron-db-sync-vthx6\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " pod="openstack/neutron-db-sync-vthx6"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.095822 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-config\") pod \"neutron-db-sync-vthx6\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " pod="openstack/neutron-db-sync-vthx6"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.095854 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-combined-ca-bundle\") pod \"neutron-db-sync-vthx6\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " pod="openstack/neutron-db-sync-vthx6"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.102735 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llpdc\" (UniqueName: \"kubernetes.io/projected/6af89ced-7c28-41a8-9446-c90f8951bd84-kube-api-access-llpdc\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.115830 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-combined-ca-bundle\") pod \"neutron-db-sync-vthx6\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " pod="openstack/neutron-db-sync-vthx6"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.119548 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-config\") pod \"neutron-db-sync-vthx6\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " pod="openstack/neutron-db-sync-vthx6"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.122983 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-d9psc"]
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.138860 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdhth\" (UniqueName: \"kubernetes.io/projected/0aa749a9-f668-4927-8a9a-28df83640ac4-kube-api-access-xdhth\") pod \"neutron-db-sync-vthx6\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " pod="openstack/neutron-db-sync-vthx6"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.157590 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-h5khg"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.187168 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-92qzx"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.198836 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-combined-ca-bundle\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.198874 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/782631b9-e01d-424c-af31-3471bfdf1587-etc-machine-id\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.198960 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-db-sync-config-data\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.198977 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-scripts\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.198993 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-config-data\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.199012 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxs4p\" (UniqueName: \"kubernetes.io/projected/782631b9-e01d-424c-af31-3471bfdf1587-kube-api-access-hxs4p\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.211778 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-wnsq8"]
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.213178 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wnsq8"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.213948 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/782631b9-e01d-424c-af31-3471bfdf1587-etc-machine-id\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.217739 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-db-sync-config-data\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.218192 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.218371 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-pfhb5"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.226028 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-combined-ca-bundle\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.234447 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-config-data\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.264768 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wnsq8"]
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.266283 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-scripts\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.283344 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vthx6"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.305380 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-db-sync-config-data\") pod \"barbican-db-sync-wnsq8\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " pod="openstack/barbican-db-sync-wnsq8"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.305479 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-combined-ca-bundle\") pod \"barbican-db-sync-wnsq8\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " pod="openstack/barbican-db-sync-wnsq8"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.305530 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn7gd\" (UniqueName: \"kubernetes.io/projected/9b3dba79-45f7-4154-9691-fa333ba6ad0d-kube-api-access-wn7gd\") pod \"barbican-db-sync-wnsq8\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " pod="openstack/barbican-db-sync-wnsq8"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.337753 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxs4p\" (UniqueName: \"kubernetes.io/projected/782631b9-e01d-424c-af31-3471bfdf1587-kube-api-access-hxs4p\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.358766 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-h5khg"]
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.385991 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" podUID="da5c6e23-f4e6-4c38-8801-89453ef0b91a" containerName="dnsmasq-dns" containerID="cri-o://8da4058576fe488581a9362856eb7b6903ab379a61631bef26d1ff5e20139718" gracePeriod=10
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.408070 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-kwf9k"]
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.409316 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kwf9k"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.410903 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vntzs"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.412571 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-db-sync-config-data\") pod \"barbican-db-sync-wnsq8\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " pod="openstack/barbican-db-sync-wnsq8"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.412765 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-combined-ca-bundle\") pod \"barbican-db-sync-wnsq8\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " pod="openstack/barbican-db-sync-wnsq8"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.412900 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn7gd\" (UniqueName: \"kubernetes.io/projected/9b3dba79-45f7-4154-9691-fa333ba6ad0d-kube-api-access-wn7gd\") pod \"barbican-db-sync-wnsq8\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " pod="openstack/barbican-db-sync-wnsq8"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.418729 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-db-sync-config-data\") pod \"barbican-db-sync-wnsq8\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " pod="openstack/barbican-db-sync-wnsq8"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.419978 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-combined-ca-bundle\") pod \"barbican-db-sync-wnsq8\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " pod="openstack/barbican-db-sync-wnsq8"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.432862 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.433027 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4cztd"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.433099 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.471688 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn7gd\" (UniqueName: \"kubernetes.io/projected/9b3dba79-45f7-4154-9691-fa333ba6ad0d-kube-api-access-wn7gd\") pod \"barbican-db-sync-wnsq8\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " pod="openstack/barbican-db-sync-wnsq8"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.475786 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-kwf9k"]
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.517924 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1302a491-8b5e-4d96-a192-ae81c6396870-logs\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.518022 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml5bj\" (UniqueName: \"kubernetes.io/projected/1302a491-8b5e-4d96-a192-ae81c6396870-kube-api-access-ml5bj\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.518063 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-scripts\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.518089 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-combined-ca-bundle\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.518121 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-config-data\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.536229 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.538826 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.548624 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.558118 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.573042 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"]
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.574936 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.607254 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.611543 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621171 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621229 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-scripts\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621269 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ww2q\" (UniqueName: \"kubernetes.io/projected/a9a54657-2d65-421e-85bb-f2e8a6eec51d-kube-api-access-8ww2q\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621302 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621382 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621406 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621456 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1302a491-8b5e-4d96-a192-ae81c6396870-logs\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621489 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621513 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-config-data\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621575 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-log-httpd\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621601 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lctwn\" (UniqueName: \"kubernetes.io/projected/ff736eba-5e3e-4608-8f3f-13783efb0735-kube-api-access-lctwn\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621627 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml5bj\" (UniqueName: \"kubernetes.io/projected/1302a491-8b5e-4d96-a192-ae81c6396870-kube-api-access-ml5bj\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621652 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-config\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621692 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621739 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-scripts\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621772 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-combined-ca-bundle\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621806 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-run-httpd\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621839 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-config-data\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.628055 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1302a491-8b5e-4d96-a192-ae81c6396870-logs\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.643054 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wnsq8"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.643240 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-combined-ca-bundle\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.649439 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-scripts\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.658087 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml5bj\" (UniqueName: \"kubernetes.io/projected/1302a491-8b5e-4d96-a192-ae81c6396870-kube-api-access-ml5bj\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.667809 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-config-data\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.691307 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"]
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725250 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725300 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-scripts\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725328 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ww2q\" (UniqueName: \"kubernetes.io/projected/a9a54657-2d65-421e-85bb-f2e8a6eec51d-kube-api-access-8ww2q\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725350 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725425 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725452 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725491 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725522 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-config-data\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725568 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-log-httpd\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725587 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lctwn\" (UniqueName: \"kubernetes.io/projected/ff736eba-5e3e-4608-8f3f-13783efb0735-kube-api-access-lctwn\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725606 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-config\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725640 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725682 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-run-httpd\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.726190 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-run-httpd\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.728547 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.732505 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.733206 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-config\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.733764 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.734218 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-scripts\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.734377 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.734651 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-log-httpd\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.752978 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-config-data\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.753258 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.753887 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.765867 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lctwn\" (UniqueName: \"kubernetes.io/projected/ff736eba-5e3e-4608-8f3f-13783efb0735-kube-api-access-lctwn\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.772636 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ww2q\" (UniqueName: \"kubernetes.io/projected/a9a54657-2d65-421e-85bb-f2e8a6eec51d-kube-api-access-8ww2q\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"
Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.782217 4761 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/placement-db-sync-kwf9k" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.880065 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.932157 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.197474 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vthx6"] Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.213588 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-92qzx"] Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.399068 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.407021 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-92qzx" event={"ID":"dce2c706-6c24-4be8-b347-90448de8aaf9","Type":"ContainerStarted","Data":"852459d3b2b553dabaa3fb65bc625cef07f0159ca47f92b91b195c4c5a7e2463"} Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.408851 4761 generic.go:334] "Generic (PLEG): container finished" podID="da5c6e23-f4e6-4c38-8801-89453ef0b91a" containerID="8da4058576fe488581a9362856eb7b6903ab379a61631bef26d1ff5e20139718" exitCode=0 Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.408902 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" event={"ID":"da5c6e23-f4e6-4c38-8801-89453ef0b91a","Type":"ContainerDied","Data":"8da4058576fe488581a9362856eb7b6903ab379a61631bef26d1ff5e20139718"} Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.410772 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vthx6" 
event={"ID":"0aa749a9-f668-4927-8a9a-28df83640ac4","Type":"ContainerStarted","Data":"3804d71548a0e319d2c143c8d2c80e16b33d66372af96bec47ce524515f0bd80"} Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.423597 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.499555 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-g9w2m" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.635061 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-h5khg"] Mar 07 08:11:37 crc kubenswrapper[4761]: W0307 08:11:37.639353 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2b79c3f_674b_4b5b_aced_27b6918c1bcb.slice/crio-6ca9da70d661ad98a330b550382412cf824a3e7cb0d294e640122381ca49a0e4 WatchSource:0}: Error finding container 6ca9da70d661ad98a330b550382412cf824a3e7cb0d294e640122381ca49a0e4: Status 404 returned error can't find the container with id 6ca9da70d661ad98a330b550382412cf824a3e7cb0d294e640122381ca49a0e4 Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.653771 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vntzs"] Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.662766 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-combined-ca-bundle\") pod \"a990e713-634f-47c4-acbe-980ed66d30fe\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.662837 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-config-data\") pod 
\"a990e713-634f-47c4-acbe-980ed66d30fe\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.662911 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-db-sync-config-data\") pod \"a990e713-634f-47c4-acbe-980ed66d30fe\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.663161 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x75n7\" (UniqueName: \"kubernetes.io/projected/a990e713-634f-47c4-acbe-980ed66d30fe-kube-api-access-x75n7\") pod \"a990e713-634f-47c4-acbe-980ed66d30fe\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.671345 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a990e713-634f-47c4-acbe-980ed66d30fe-kube-api-access-x75n7" (OuterVolumeSpecName: "kube-api-access-x75n7") pod "a990e713-634f-47c4-acbe-980ed66d30fe" (UID: "a990e713-634f-47c4-acbe-980ed66d30fe"). InnerVolumeSpecName "kube-api-access-x75n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.672900 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a990e713-634f-47c4-acbe-980ed66d30fe" (UID: "a990e713-634f-47c4-acbe-980ed66d30fe"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.684979 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x75n7\" (UniqueName: \"kubernetes.io/projected/a990e713-634f-47c4-acbe-980ed66d30fe-kube-api-access-x75n7\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.685306 4761 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.751023 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a990e713-634f-47c4-acbe-980ed66d30fe" (UID: "a990e713-634f-47c4-acbe-980ed66d30fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.774825 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wnsq8"] Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.788052 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.827847 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-kwf9k"] Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.829493 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.834960 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-config-data" (OuterVolumeSpecName: "config-data") pod "a990e713-634f-47c4-acbe-980ed66d30fe" (UID: "a990e713-634f-47c4-acbe-980ed66d30fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.840612 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-d9psc"] Mar 07 08:11:37 crc kubenswrapper[4761]: W0307 08:11:37.854308 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1302a491_8b5e_4d96_a192_ae81c6396870.slice/crio-37069611aa3b30f5ad29c74502df3567823a99c10fc10de76b428ece21310540 WatchSource:0}: Error finding container 37069611aa3b30f5ad29c74502df3567823a99c10fc10de76b428ece21310540: Status 404 returned error can't find the container with id 37069611aa3b30f5ad29c74502df3567823a99c10fc10de76b428ece21310540 Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.908519 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-swift-storage-0\") pod \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.908638 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmwlb\" (UniqueName: \"kubernetes.io/projected/da5c6e23-f4e6-4c38-8801-89453ef0b91a-kube-api-access-wmwlb\") pod \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.909009 4761 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-config\") pod \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.909086 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-sb\") pod \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.909144 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-svc\") pod \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.909639 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-nb\") pod \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.910735 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.930218 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.945966 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da5c6e23-f4e6-4c38-8801-89453ef0b91a-kube-api-access-wmwlb" (OuterVolumeSpecName: "kube-api-access-wmwlb") 
pod "da5c6e23-f4e6-4c38-8801-89453ef0b91a" (UID: "da5c6e23-f4e6-4c38-8801-89453ef0b91a"). InnerVolumeSpecName "kube-api-access-wmwlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.977666 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"] Mar 07 08:11:38 crc kubenswrapper[4761]: W0307 08:11:38.008840 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9a54657_2d65_421e_85bb_f2e8a6eec51d.slice/crio-4ffde51a30e0b77601e0b484690099e34145be0311d43294f7f4e9945298fe39 WatchSource:0}: Error finding container 4ffde51a30e0b77601e0b484690099e34145be0311d43294f7f4e9945298fe39: Status 404 returned error can't find the container with id 4ffde51a30e0b77601e0b484690099e34145be0311d43294f7f4e9945298fe39 Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.014539 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmwlb\" (UniqueName: \"kubernetes.io/projected/da5c6e23-f4e6-4c38-8801-89453ef0b91a-kube-api-access-wmwlb\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.220332 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.335154 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-config" (OuterVolumeSpecName: "config") pod "da5c6e23-f4e6-4c38-8801-89453ef0b91a" (UID: "da5c6e23-f4e6-4c38-8801-89453ef0b91a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.348813 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da5c6e23-f4e6-4c38-8801-89453ef0b91a" (UID: "da5c6e23-f4e6-4c38-8801-89453ef0b91a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.378226 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da5c6e23-f4e6-4c38-8801-89453ef0b91a" (UID: "da5c6e23-f4e6-4c38-8801-89453ef0b91a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.384293 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "da5c6e23-f4e6-4c38-8801-89453ef0b91a" (UID: "da5c6e23-f4e6-4c38-8801-89453ef0b91a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.392094 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "da5c6e23-f4e6-4c38-8801-89453ef0b91a" (UID: "da5c6e23-f4e6-4c38-8801-89453ef0b91a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.425994 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.426323 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.426336 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.426347 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.426359 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.458438 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vthx6" event={"ID":"0aa749a9-f668-4927-8a9a-28df83640ac4","Type":"ContainerStarted","Data":"894118d7d8b95a32c8f3ddf3e2f498ea4edd0ef3d4c6251c424e04fb6574d11a"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.472897 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kwf9k" event={"ID":"1302a491-8b5e-4d96-a192-ae81c6396870","Type":"ContainerStarted","Data":"37069611aa3b30f5ad29c74502df3567823a99c10fc10de76b428ece21310540"} Mar 
07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.480099 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d9psc" event={"ID":"782631b9-e01d-424c-af31-3471bfdf1587","Type":"ContainerStarted","Data":"57fe1c0b330204d6c39c8493ef2a297ed02920ab824fcfb73ae311a94daa5c9c"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.483211 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-g9w2m" event={"ID":"a990e713-634f-47c4-acbe-980ed66d30fe","Type":"ContainerDied","Data":"cee1ba9976056eb67ee81744b62970e36b05720039ac0e14a5708002d899744d"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.483237 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cee1ba9976056eb67ee81744b62970e36b05720039ac0e14a5708002d899744d" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.483312 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-g9w2m" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.489475 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-vthx6" podStartSLOduration=3.489452301 podStartE2EDuration="3.489452301s" podCreationTimestamp="2026-03-07 08:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:38.473339702 +0000 UTC m=+1355.382506177" watchObservedRunningTime="2026-03-07 08:11:38.489452301 +0000 UTC m=+1355.398618776" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.496142 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-h5khg" event={"ID":"f2b79c3f-674b-4b5b-aced-27b6918c1bcb","Type":"ContainerStarted","Data":"772989b70eec3b548dee037094b02d89023b3589a3f2ed8a8189fbe364d5c076"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.496208 4761 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-5959f8865f-h5khg" event={"ID":"f2b79c3f-674b-4b5b-aced-27b6918c1bcb","Type":"ContainerStarted","Data":"6ca9da70d661ad98a330b550382412cf824a3e7cb0d294e640122381ca49a0e4"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.508241 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerStarted","Data":"da1033284673b02ff41b3d930dbfee0b2953cef69b3b38ce497df0dcfce3925a"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.516300 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" event={"ID":"da5c6e23-f4e6-4c38-8801-89453ef0b91a","Type":"ContainerDied","Data":"3cbd8d27644c5a92e8a51f2f09491a39423c7c677204bd57749d30ef21c68d85"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.516361 4761 scope.go:117] "RemoveContainer" containerID="8da4058576fe488581a9362856eb7b6903ab379a61631bef26d1ff5e20139718" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.516505 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.524231 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wnsq8" event={"ID":"9b3dba79-45f7-4154-9691-fa333ba6ad0d","Type":"ContainerStarted","Data":"a3118b0da5de11a281c834601ee472fce42e89b13a7f308dbb3bfacc88e63820"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.548404 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vntzs" event={"ID":"6af89ced-7c28-41a8-9446-c90f8951bd84","Type":"ContainerStarted","Data":"5468dd9272cfb94f64e60fd95f4a2837460a1196ebd1cf21d856f7fa46025406"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.548453 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vntzs" event={"ID":"6af89ced-7c28-41a8-9446-c90f8951bd84","Type":"ContainerStarted","Data":"a88d6f67863961e6e38074aff245b0a56641eda60bc9de889d89752cbd09fcbd"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.551660 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" event={"ID":"a9a54657-2d65-421e-85bb-f2e8a6eec51d","Type":"ContainerStarted","Data":"4ffde51a30e0b77601e0b484690099e34145be0311d43294f7f4e9945298fe39"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.572048 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.588179 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vntzs" podStartSLOduration=3.58815507 podStartE2EDuration="3.58815507s" podCreationTimestamp="2026-03-07 08:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:38.570654855 +0000 UTC m=+1355.479821330" 
watchObservedRunningTime="2026-03-07 08:11:38.58815507 +0000 UTC m=+1355.497321545" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.615903 4761 scope.go:117] "RemoveContainer" containerID="14da72442b10d10498c7e858f38e4f1fb7e091af17ae06c7023832789db215bc" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.754415 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cbkln"] Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.823236 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cbkln"] Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.188478 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"] Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.248683 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-fmpdp"] Mar 07 08:11:39 crc kubenswrapper[4761]: E0307 08:11:39.249191 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5c6e23-f4e6-4c38-8801-89453ef0b91a" containerName="dnsmasq-dns" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.249209 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5c6e23-f4e6-4c38-8801-89453ef0b91a" containerName="dnsmasq-dns" Mar 07 08:11:39 crc kubenswrapper[4761]: E0307 08:11:39.249247 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a990e713-634f-47c4-acbe-980ed66d30fe" containerName="glance-db-sync" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.249254 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a990e713-634f-47c4-acbe-980ed66d30fe" containerName="glance-db-sync" Mar 07 08:11:39 crc kubenswrapper[4761]: E0307 08:11:39.249274 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5c6e23-f4e6-4c38-8801-89453ef0b91a" containerName="init" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.249280 4761 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="da5c6e23-f4e6-4c38-8801-89453ef0b91a" containerName="init" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.249456 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5c6e23-f4e6-4c38-8801-89453ef0b91a" containerName="dnsmasq-dns" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.249476 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a990e713-634f-47c4-acbe-980ed66d30fe" containerName="glance-db-sync" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.250608 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.280434 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-fmpdp"] Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.373401 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.373467 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.373535 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: 
\"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.373667 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zknfr\" (UniqueName: \"kubernetes.io/projected/538ded96-3415-417f-8b82-5e29c85bf943-kube-api-access-zknfr\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.373773 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-config\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.373876 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.476227 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zknfr\" (UniqueName: \"kubernetes.io/projected/538ded96-3415-417f-8b82-5e29c85bf943-kube-api-access-zknfr\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.476289 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-config\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: 
\"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.476340 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.476406 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.476429 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.476460 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.477186 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.477549 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.477691 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.478143 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.481581 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-config\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.507486 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zknfr\" (UniqueName: \"kubernetes.io/projected/538ded96-3415-417f-8b82-5e29c85bf943-kube-api-access-zknfr\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: 
I0307 08:11:39.592994 4761 generic.go:334] "Generic (PLEG): container finished" podID="f2b79c3f-674b-4b5b-aced-27b6918c1bcb" containerID="772989b70eec3b548dee037094b02d89023b3589a3f2ed8a8189fbe364d5c076" exitCode=0 Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.593874 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-h5khg" event={"ID":"f2b79c3f-674b-4b5b-aced-27b6918c1bcb","Type":"ContainerDied","Data":"772989b70eec3b548dee037094b02d89023b3589a3f2ed8a8189fbe364d5c076"} Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.593952 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-h5khg" event={"ID":"f2b79c3f-674b-4b5b-aced-27b6918c1bcb","Type":"ContainerDied","Data":"6ca9da70d661ad98a330b550382412cf824a3e7cb0d294e640122381ca49a0e4"} Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.593969 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ca9da70d661ad98a330b550382412cf824a3e7cb0d294e640122381ca49a0e4" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.628151 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.629672 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.681425 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq47r\" (UniqueName: \"kubernetes.io/projected/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-kube-api-access-pq47r\") pod \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.681500 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-config\") pod \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.681597 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-svc\") pod \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.681620 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-swift-storage-0\") pod \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.681669 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-sb\") pod \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.681779 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-nb\") pod \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.709184 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-kube-api-access-pq47r" (OuterVolumeSpecName: "kube-api-access-pq47r") pod "f2b79c3f-674b-4b5b-aced-27b6918c1bcb" (UID: "f2b79c3f-674b-4b5b-aced-27b6918c1bcb"). InnerVolumeSpecName "kube-api-access-pq47r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.776435 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f2b79c3f-674b-4b5b-aced-27b6918c1bcb" (UID: "f2b79c3f-674b-4b5b-aced-27b6918c1bcb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.779904 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2b79c3f-674b-4b5b-aced-27b6918c1bcb" (UID: "f2b79c3f-674b-4b5b-aced-27b6918c1bcb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.799827 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.800053 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.800141 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq47r\" (UniqueName: \"kubernetes.io/projected/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-kube-api-access-pq47r\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.804293 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f2b79c3f-674b-4b5b-aced-27b6918c1bcb" (UID: "f2b79c3f-674b-4b5b-aced-27b6918c1bcb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.805059 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f2b79c3f-674b-4b5b-aced-27b6918c1bcb" (UID: "f2b79c3f-674b-4b5b-aced-27b6918c1bcb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.810411 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da5c6e23-f4e6-4c38-8801-89453ef0b91a" path="/var/lib/kubelet/pods/da5c6e23-f4e6-4c38-8801-89453ef0b91a/volumes" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.815990 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-config" (OuterVolumeSpecName: "config") pod "f2b79c3f-674b-4b5b-aced-27b6918c1bcb" (UID: "f2b79c3f-674b-4b5b-aced-27b6918c1bcb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.907410 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.907454 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.907467 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.990005 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:11:39 crc kubenswrapper[4761]: E0307 08:11:39.994058 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b79c3f-674b-4b5b-aced-27b6918c1bcb" containerName="init" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.994086 4761 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f2b79c3f-674b-4b5b-aced-27b6918c1bcb" containerName="init" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.994497 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b79c3f-674b-4b5b-aced-27b6918c1bcb" containerName="init" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.995753 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:39.997844 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:39.997844 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zmqzm" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:39.997848 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.008420 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.393461 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.394823 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2k6h\" (UniqueName: \"kubernetes.io/projected/ade806c4-9da8-4204-b97b-35f0d84ffeb6-kube-api-access-t2k6h\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc 
kubenswrapper[4761]: I0307 08:11:40.394860 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.394957 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-config-data\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.395007 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-logs\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.395107 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-scripts\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.395140 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " 
pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.499341 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2k6h\" (UniqueName: \"kubernetes.io/projected/ade806c4-9da8-4204-b97b-35f0d84ffeb6-kube-api-access-t2k6h\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.499371 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.499417 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-config-data\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.499439 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-logs\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.499486 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-scripts\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: 
I0307 08:11:40.499507 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.499603 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.499947 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.501126 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-logs\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.505700 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.505781 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f1ce1c096842b9627111c5f89fad26fafb9d1f61d1f48c8efc1ee653de0d59a3/globalmount\"" pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.506119 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.507925 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-config-data\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.511152 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-scripts\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.530917 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2k6h\" (UniqueName: \"kubernetes.io/projected/ade806c4-9da8-4204-b97b-35f0d84ffeb6-kube-api-access-t2k6h\") pod 
\"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.587964 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.629724 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.647586 4761 generic.go:334] "Generic (PLEG): container finished" podID="a9a54657-2d65-421e-85bb-f2e8a6eec51d" containerID="5a0eb6491da943111402b16d7cfbce642592a9341493289d4f64ca227baef728" exitCode=0 Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.647726 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.648412 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" event={"ID":"a9a54657-2d65-421e-85bb-f2e8a6eec51d","Type":"ContainerDied","Data":"5a0eb6491da943111402b16d7cfbce642592a9341493289d4f64ca227baef728"} Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.678851 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-fmpdp"] Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.719031 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.721236 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.730506 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.811820 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.811905 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-logs\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.811995 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-config-data\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.812072 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.812183 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-scripts\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.812331 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqplz\" (UniqueName: \"kubernetes.io/projected/93d41018-801a-4081-8e8f-5f8809cb0e41-kube-api-access-pqplz\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.812423 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.826861 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.915993 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqplz\" (UniqueName: \"kubernetes.io/projected/93d41018-801a-4081-8e8f-5f8809cb0e41-kube-api-access-pqplz\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.916095 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.916183 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.916210 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-logs\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.916277 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-config-data\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.916334 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.916395 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.921990 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.922807 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-logs\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.926984 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.927033 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/851ce73d1b192d58f34aae6f8e819bd73d3fa6a2538f169362f333663b0c473e/globalmount\"" pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.931037 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-scripts\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.932184 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-config-data\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.950022 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.977774 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqplz\" (UniqueName: \"kubernetes.io/projected/93d41018-801a-4081-8e8f-5f8809cb0e41-kube-api-access-pqplz\") pod 
\"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.990800 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.015241 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-h5khg"] Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.039275 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-h5khg"] Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.222252 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.263390 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.337627 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-svc\") pod \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.337749 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ww2q\" (UniqueName: \"kubernetes.io/projected/a9a54657-2d65-421e-85bb-f2e8a6eec51d-kube-api-access-8ww2q\") pod \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.337819 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-sb\") pod \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.337915 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-config\") pod \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.337990 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-swift-storage-0\") pod \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.338076 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-nb\") pod \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.343079 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9a54657-2d65-421e-85bb-f2e8a6eec51d-kube-api-access-8ww2q" (OuterVolumeSpecName: "kube-api-access-8ww2q") pod "a9a54657-2d65-421e-85bb-f2e8a6eec51d" (UID: "a9a54657-2d65-421e-85bb-f2e8a6eec51d"). InnerVolumeSpecName "kube-api-access-8ww2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.414477 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-config" (OuterVolumeSpecName: "config") pod "a9a54657-2d65-421e-85bb-f2e8a6eec51d" (UID: "a9a54657-2d65-421e-85bb-f2e8a6eec51d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.415094 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a9a54657-2d65-421e-85bb-f2e8a6eec51d" (UID: "a9a54657-2d65-421e-85bb-f2e8a6eec51d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.417644 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a9a54657-2d65-421e-85bb-f2e8a6eec51d" (UID: "a9a54657-2d65-421e-85bb-f2e8a6eec51d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.440272 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a9a54657-2d65-421e-85bb-f2e8a6eec51d" (UID: "a9a54657-2d65-421e-85bb-f2e8a6eec51d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.441657 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.441674 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.441683 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ww2q\" (UniqueName: \"kubernetes.io/projected/a9a54657-2d65-421e-85bb-f2e8a6eec51d-kube-api-access-8ww2q\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.441693 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.441701 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.456659 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a9a54657-2d65-421e-85bb-f2e8a6eec51d" (UID: "a9a54657-2d65-421e-85bb-f2e8a6eec51d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.543758 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.584871 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.690669 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ade806c4-9da8-4204-b97b-35f0d84ffeb6","Type":"ContainerStarted","Data":"2b2bed0c69200cc45c2dbf10fa19c8940ea196711052d4b23aa0df44bce1ab2e"} Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.693936 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" event={"ID":"a9a54657-2d65-421e-85bb-f2e8a6eec51d","Type":"ContainerDied","Data":"4ffde51a30e0b77601e0b484690099e34145be0311d43294f7f4e9945298fe39"} Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.693997 4761 scope.go:117] "RemoveContainer" containerID="5a0eb6491da943111402b16d7cfbce642592a9341493289d4f64ca227baef728" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.693994 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.760705 4761 generic.go:334] "Generic (PLEG): container finished" podID="538ded96-3415-417f-8b82-5e29c85bf943" containerID="9186130caa21c79ac5e7c5f0448f28d89ff465dd03e9542cb0fa32079fc08ea6" exitCode=0 Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.772627 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b79c3f-674b-4b5b-aced-27b6918c1bcb" path="/var/lib/kubelet/pods/f2b79c3f-674b-4b5b-aced-27b6918c1bcb/volumes" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.773424 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" event={"ID":"538ded96-3415-417f-8b82-5e29c85bf943","Type":"ContainerDied","Data":"9186130caa21c79ac5e7c5f0448f28d89ff465dd03e9542cb0fa32079fc08ea6"} Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.773451 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" event={"ID":"538ded96-3415-417f-8b82-5e29c85bf943","Type":"ContainerStarted","Data":"8b61496bdc0ba9cdbb66cd8ab5e8c4b517098cd8c010b8f4c5a0c0fd26c3cd65"} Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.857753 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"] Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.870615 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"] Mar 07 08:11:42 crc kubenswrapper[4761]: I0307 08:11:42.293441 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:11:42 crc kubenswrapper[4761]: W0307 08:11:42.324445 4761 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93d41018_801a_4081_8e8f_5f8809cb0e41.slice/crio-4eaf66b670ef081e714a15a16a15827499d9c0af073870562be41b7d510fef4e WatchSource:0}: Error finding container 4eaf66b670ef081e714a15a16a15827499d9c0af073870562be41b7d510fef4e: Status 404 returned error can't find the container with id 4eaf66b670ef081e714a15a16a15827499d9c0af073870562be41b7d510fef4e Mar 07 08:11:42 crc kubenswrapper[4761]: I0307 08:11:42.777710 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93d41018-801a-4081-8e8f-5f8809cb0e41","Type":"ContainerStarted","Data":"4eaf66b670ef081e714a15a16a15827499d9c0af073870562be41b7d510fef4e"} Mar 07 08:11:42 crc kubenswrapper[4761]: I0307 08:11:42.781264 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" event={"ID":"538ded96-3415-417f-8b82-5e29c85bf943","Type":"ContainerStarted","Data":"4731ffe1012d05ac8d1c43ca1ea7417657ab91649645937ec299cefb6cbc4e8c"} Mar 07 08:11:42 crc kubenswrapper[4761]: I0307 08:11:42.782938 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:42 crc kubenswrapper[4761]: I0307 08:11:42.825925 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" podStartSLOduration=3.825906829 podStartE2EDuration="3.825906829s" podCreationTimestamp="2026-03-07 08:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:42.81296003 +0000 UTC m=+1359.722126495" watchObservedRunningTime="2026-03-07 08:11:42.825906829 +0000 UTC m=+1359.735073304" Mar 07 08:11:43 crc kubenswrapper[4761]: I0307 08:11:43.737641 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9a54657-2d65-421e-85bb-f2e8a6eec51d" 
path="/var/lib/kubelet/pods/a9a54657-2d65-421e-85bb-f2e8a6eec51d/volumes" Mar 07 08:11:43 crc kubenswrapper[4761]: I0307 08:11:43.767957 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:11:43 crc kubenswrapper[4761]: I0307 08:11:43.768016 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:11:43 crc kubenswrapper[4761]: I0307 08:11:43.768062 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:11:43 crc kubenswrapper[4761]: I0307 08:11:43.768900 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c720defb28c06a1aa2b8b26acca0b7c32fc87b6223c85d1c22d3f2b9565b9ee4"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:11:43 crc kubenswrapper[4761]: I0307 08:11:43.768960 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://c720defb28c06a1aa2b8b26acca0b7c32fc87b6223c85d1c22d3f2b9565b9ee4" gracePeriod=600 Mar 07 08:11:43 crc kubenswrapper[4761]: I0307 08:11:43.823031 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"ade806c4-9da8-4204-b97b-35f0d84ffeb6","Type":"ContainerStarted","Data":"49f77926d06e2e04414379285e8dc0cfdc7b0497e02b0d16e01e837153f8b1c6"} Mar 07 08:11:43 crc kubenswrapper[4761]: I0307 08:11:43.830217 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93d41018-801a-4081-8e8f-5f8809cb0e41","Type":"ContainerStarted","Data":"f3ab7c6d5a03f429467b96c88afc5ca3cbd8e769a0f3e529010526f7deb8dcae"} Mar 07 08:11:44 crc kubenswrapper[4761]: I0307 08:11:44.857128 4761 generic.go:334] "Generic (PLEG): container finished" podID="6af89ced-7c28-41a8-9446-c90f8951bd84" containerID="5468dd9272cfb94f64e60fd95f4a2837460a1196ebd1cf21d856f7fa46025406" exitCode=0 Mar 07 08:11:44 crc kubenswrapper[4761]: I0307 08:11:44.857199 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vntzs" event={"ID":"6af89ced-7c28-41a8-9446-c90f8951bd84","Type":"ContainerDied","Data":"5468dd9272cfb94f64e60fd95f4a2837460a1196ebd1cf21d856f7fa46025406"} Mar 07 08:11:44 crc kubenswrapper[4761]: I0307 08:11:44.862529 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="c720defb28c06a1aa2b8b26acca0b7c32fc87b6223c85d1c22d3f2b9565b9ee4" exitCode=0 Mar 07 08:11:44 crc kubenswrapper[4761]: I0307 08:11:44.863171 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"c720defb28c06a1aa2b8b26acca0b7c32fc87b6223c85d1c22d3f2b9565b9ee4"} Mar 07 08:11:44 crc kubenswrapper[4761]: I0307 08:11:44.863227 4761 scope.go:117] "RemoveContainer" containerID="aca69e929765f604d6be340ee9bf2395b19b14b626bf0c5263eb403497f029cf" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.027694 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-external-api-0"] Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.099598 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.695645 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.818027 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llpdc\" (UniqueName: \"kubernetes.io/projected/6af89ced-7c28-41a8-9446-c90f8951bd84-kube-api-access-llpdc\") pod \"6af89ced-7c28-41a8-9446-c90f8951bd84\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.818391 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-combined-ca-bundle\") pod \"6af89ced-7c28-41a8-9446-c90f8951bd84\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.818439 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-fernet-keys\") pod \"6af89ced-7c28-41a8-9446-c90f8951bd84\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.818539 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-scripts\") pod \"6af89ced-7c28-41a8-9446-c90f8951bd84\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.819038 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-credential-keys\") pod \"6af89ced-7c28-41a8-9446-c90f8951bd84\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.819087 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-config-data\") pod \"6af89ced-7c28-41a8-9446-c90f8951bd84\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.828496 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af89ced-7c28-41a8-9446-c90f8951bd84-kube-api-access-llpdc" (OuterVolumeSpecName: "kube-api-access-llpdc") pod "6af89ced-7c28-41a8-9446-c90f8951bd84" (UID: "6af89ced-7c28-41a8-9446-c90f8951bd84"). InnerVolumeSpecName "kube-api-access-llpdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.828631 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-scripts" (OuterVolumeSpecName: "scripts") pod "6af89ced-7c28-41a8-9446-c90f8951bd84" (UID: "6af89ced-7c28-41a8-9446-c90f8951bd84"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.835004 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6af89ced-7c28-41a8-9446-c90f8951bd84" (UID: "6af89ced-7c28-41a8-9446-c90f8951bd84"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.835127 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6af89ced-7c28-41a8-9446-c90f8951bd84" (UID: "6af89ced-7c28-41a8-9446-c90f8951bd84"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.868790 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-config-data" (OuterVolumeSpecName: "config-data") pod "6af89ced-7c28-41a8-9446-c90f8951bd84" (UID: "6af89ced-7c28-41a8-9446-c90f8951bd84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.917870 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6af89ced-7c28-41a8-9446-c90f8951bd84" (UID: "6af89ced-7c28-41a8-9446-c90f8951bd84"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.921882 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.921951 4761 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.921966 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.921979 4761 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.921990 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.922001 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llpdc\" (UniqueName: \"kubernetes.io/projected/6af89ced-7c28-41a8-9446-c90f8951bd84-kube-api-access-llpdc\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.928995 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vntzs" event={"ID":"6af89ced-7c28-41a8-9446-c90f8951bd84","Type":"ContainerDied","Data":"a88d6f67863961e6e38074aff245b0a56641eda60bc9de889d89752cbd09fcbd"} Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 
08:11:46.929040 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a88d6f67863961e6e38074aff245b0a56641eda60bc9de889d89752cbd09fcbd" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.929094 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.971839 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vntzs"] Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.986376 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vntzs"] Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.074230 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mb4ct"] Mar 07 08:11:47 crc kubenswrapper[4761]: E0307 08:11:47.074684 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af89ced-7c28-41a8-9446-c90f8951bd84" containerName="keystone-bootstrap" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.074698 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af89ced-7c28-41a8-9446-c90f8951bd84" containerName="keystone-bootstrap" Mar 07 08:11:47 crc kubenswrapper[4761]: E0307 08:11:47.074748 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a54657-2d65-421e-85bb-f2e8a6eec51d" containerName="init" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.074754 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a54657-2d65-421e-85bb-f2e8a6eec51d" containerName="init" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.074951 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a54657-2d65-421e-85bb-f2e8a6eec51d" containerName="init" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.074970 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af89ced-7c28-41a8-9446-c90f8951bd84" 
containerName="keystone-bootstrap" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.075853 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.078734 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.078749 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.078827 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.078874 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.082273 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pgh8w" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.089496 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mb4ct"] Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.126659 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-credential-keys\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.126771 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-scripts\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc 
kubenswrapper[4761]: I0307 08:11:47.126845 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-config-data\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.126866 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-combined-ca-bundle\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.126908 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pndwr\" (UniqueName: \"kubernetes.io/projected/30f40316-2c99-4892-b3c5-9e3e61f05212-kube-api-access-pndwr\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.126935 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-fernet-keys\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.228282 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-fernet-keys\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 
08:11:47.228358 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-credential-keys\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.228425 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-scripts\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.228496 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-config-data\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.228515 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-combined-ca-bundle\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.228553 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pndwr\" (UniqueName: \"kubernetes.io/projected/30f40316-2c99-4892-b3c5-9e3e61f05212-kube-api-access-pndwr\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.233752 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-fernet-keys\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.233956 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-config-data\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.244135 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-scripts\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.244293 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-combined-ca-bundle\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.244625 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-credential-keys\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.264153 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pndwr\" (UniqueName: \"kubernetes.io/projected/30f40316-2c99-4892-b3c5-9e3e61f05212-kube-api-access-pndwr\") pod \"keystone-bootstrap-mb4ct\" (UID: 
\"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.392481 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.722265 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6af89ced-7c28-41a8-9446-c90f8951bd84" path="/var/lib/kubelet/pods/6af89ced-7c28-41a8-9446-c90f8951bd84/volumes" Mar 07 08:11:49 crc kubenswrapper[4761]: I0307 08:11:49.629919 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:49 crc kubenswrapper[4761]: I0307 08:11:49.761848 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5r7cq"] Mar 07 08:11:49 crc kubenswrapper[4761]: I0307 08:11:49.762905 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-5r7cq" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="dnsmasq-dns" containerID="cri-o://a3c3a2734ca2fdfedd4aa0341d15fa1948d68cb529469c8595db30900e537e2e" gracePeriod=10 Mar 07 08:11:52 crc kubenswrapper[4761]: I0307 08:11:52.368952 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-5r7cq" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: connect: connection refused" Mar 07 08:11:52 crc kubenswrapper[4761]: I0307 08:11:52.479138 4761 generic.go:334] "Generic (PLEG): container finished" podID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerID="a3c3a2734ca2fdfedd4aa0341d15fa1948d68cb529469c8595db30900e537e2e" exitCode=0 Mar 07 08:11:52 crc kubenswrapper[4761]: I0307 08:11:52.479189 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5r7cq" 
event={"ID":"067b5424-8f75-4bb9-ab09-588e4e306a28","Type":"ContainerDied","Data":"a3c3a2734ca2fdfedd4aa0341d15fa1948d68cb529469c8595db30900e537e2e"} Mar 07 08:11:53 crc kubenswrapper[4761]: E0307 08:11:53.005810 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Mar 07 08:11:53 crc kubenswrapper[4761]: E0307 08:11:53.006755 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ml5bj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recursiv
eReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-kwf9k_openstack(1302a491-8b5e-4d96-a192-ae81c6396870): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:11:53 crc kubenswrapper[4761]: E0307 08:11:53.009903 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-kwf9k" podUID="1302a491-8b5e-4d96-a192-ae81c6396870" Mar 07 08:11:53 crc kubenswrapper[4761]: E0307 08:11:53.500147 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-kwf9k" podUID="1302a491-8b5e-4d96-a192-ae81c6396870" Mar 07 08:11:54 crc kubenswrapper[4761]: I0307 08:11:54.505308 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ade806c4-9da8-4204-b97b-35f0d84ffeb6","Type":"ContainerStarted","Data":"d0d15d62ed51411748f8b5cd631263da5c641011d92d42e6b0ec72249a793814"} Mar 07 08:11:54 crc kubenswrapper[4761]: I0307 08:11:54.505799 4761 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerName="glance-log" containerID="cri-o://49f77926d06e2e04414379285e8dc0cfdc7b0497e02b0d16e01e837153f8b1c6" gracePeriod=30 Mar 07 08:11:54 crc kubenswrapper[4761]: I0307 08:11:54.506358 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerName="glance-httpd" containerID="cri-o://d0d15d62ed51411748f8b5cd631263da5c641011d92d42e6b0ec72249a793814" gracePeriod=30 Mar 07 08:11:54 crc kubenswrapper[4761]: I0307 08:11:54.534312 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=16.534292852 podStartE2EDuration="16.534292852s" podCreationTimestamp="2026-03-07 08:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:54.52826291 +0000 UTC m=+1371.437429415" watchObservedRunningTime="2026-03-07 08:11:54.534292852 +0000 UTC m=+1371.443459327" Mar 07 08:11:55 crc kubenswrapper[4761]: I0307 08:11:55.531283 4761 generic.go:334] "Generic (PLEG): container finished" podID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerID="d0d15d62ed51411748f8b5cd631263da5c641011d92d42e6b0ec72249a793814" exitCode=0 Mar 07 08:11:55 crc kubenswrapper[4761]: I0307 08:11:55.531313 4761 generic.go:334] "Generic (PLEG): container finished" podID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerID="49f77926d06e2e04414379285e8dc0cfdc7b0497e02b0d16e01e837153f8b1c6" exitCode=143 Mar 07 08:11:55 crc kubenswrapper[4761]: I0307 08:11:55.531335 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"ade806c4-9da8-4204-b97b-35f0d84ffeb6","Type":"ContainerDied","Data":"d0d15d62ed51411748f8b5cd631263da5c641011d92d42e6b0ec72249a793814"} Mar 07 08:11:55 crc kubenswrapper[4761]: I0307 08:11:55.531361 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ade806c4-9da8-4204-b97b-35f0d84ffeb6","Type":"ContainerDied","Data":"49f77926d06e2e04414379285e8dc0cfdc7b0497e02b0d16e01e837153f8b1c6"} Mar 07 08:11:56 crc kubenswrapper[4761]: E0307 08:11:56.416981 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Mar 07 08:11:56 crc kubenswrapper[4761]: E0307 08:11:56.417378 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wn7gd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-wnsq8_openstack(9b3dba79-45f7-4154-9691-fa333ba6ad0d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:11:56 crc kubenswrapper[4761]: E0307 08:11:56.418592 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-wnsq8" 
podUID="9b3dba79-45f7-4154-9691-fa333ba6ad0d" Mar 07 08:11:56 crc kubenswrapper[4761]: E0307 08:11:56.544466 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-wnsq8" podUID="9b3dba79-45f7-4154-9691-fa333ba6ad0d" Mar 07 08:11:57 crc kubenswrapper[4761]: I0307 08:11:57.153601 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-5r7cq" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: connect: connection refused" Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.143616 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547852-bt6bz"] Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.149018 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547852-bt6bz" Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.150966 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.151621 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.152069 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.158507 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547852-bt6bz"] Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.208552 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jb8q\" (UniqueName: \"kubernetes.io/projected/dd21ae8c-0b60-48ed-b287-3f861535b5d6-kube-api-access-8jb8q\") pod \"auto-csr-approver-29547852-bt6bz\" (UID: \"dd21ae8c-0b60-48ed-b287-3f861535b5d6\") " pod="openshift-infra/auto-csr-approver-29547852-bt6bz" Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.314532 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jb8q\" (UniqueName: \"kubernetes.io/projected/dd21ae8c-0b60-48ed-b287-3f861535b5d6-kube-api-access-8jb8q\") pod \"auto-csr-approver-29547852-bt6bz\" (UID: \"dd21ae8c-0b60-48ed-b287-3f861535b5d6\") " pod="openshift-infra/auto-csr-approver-29547852-bt6bz" Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.336911 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jb8q\" (UniqueName: \"kubernetes.io/projected/dd21ae8c-0b60-48ed-b287-3f861535b5d6-kube-api-access-8jb8q\") pod \"auto-csr-approver-29547852-bt6bz\" (UID: \"dd21ae8c-0b60-48ed-b287-3f861535b5d6\") " 
pod="openshift-infra/auto-csr-approver-29547852-bt6bz" Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.480201 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547852-bt6bz" Mar 07 08:12:07 crc kubenswrapper[4761]: I0307 08:12:07.154234 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-5r7cq" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: i/o timeout" Mar 07 08:12:07 crc kubenswrapper[4761]: I0307 08:12:07.155029 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:12:09 crc kubenswrapper[4761]: I0307 08:12:09.700304 4761 generic.go:334] "Generic (PLEG): container finished" podID="0aa749a9-f668-4927-8a9a-28df83640ac4" containerID="894118d7d8b95a32c8f3ddf3e2f498ea4edd0ef3d4c6251c424e04fb6574d11a" exitCode=0 Mar 07 08:12:09 crc kubenswrapper[4761]: I0307 08:12:09.700784 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vthx6" event={"ID":"0aa749a9-f668-4927-8a9a-28df83640ac4","Type":"ContainerDied","Data":"894118d7d8b95a32c8f3ddf3e2f498ea4edd0ef3d4c6251c424e04fb6574d11a"} Mar 07 08:12:10 crc kubenswrapper[4761]: E0307 08:12:10.413951 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Mar 07 08:12:10 crc kubenswrapper[4761]: E0307 08:12:10.414450 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66pdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-92qzx_openstack(dce2c706-6c24-4be8-b347-90448de8aaf9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 
07 08:12:10 crc kubenswrapper[4761]: E0307 08:12:10.415577 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-92qzx" podUID="dce2c706-6c24-4be8-b347-90448de8aaf9" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.534038 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.630316 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.630367 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.674185 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrfs7\" (UniqueName: \"kubernetes.io/projected/067b5424-8f75-4bb9-ab09-588e4e306a28-kube-api-access-qrfs7\") pod \"067b5424-8f75-4bb9-ab09-588e4e306a28\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.674234 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-config\") pod \"067b5424-8f75-4bb9-ab09-588e4e306a28\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.674328 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-sb\") pod \"067b5424-8f75-4bb9-ab09-588e4e306a28\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 
08:12:10.674440 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-dns-svc\") pod \"067b5424-8f75-4bb9-ab09-588e4e306a28\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.674504 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-nb\") pod \"067b5424-8f75-4bb9-ab09-588e4e306a28\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.683438 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/067b5424-8f75-4bb9-ab09-588e4e306a28-kube-api-access-qrfs7" (OuterVolumeSpecName: "kube-api-access-qrfs7") pod "067b5424-8f75-4bb9-ab09-588e4e306a28" (UID: "067b5424-8f75-4bb9-ab09-588e4e306a28"). InnerVolumeSpecName "kube-api-access-qrfs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.721415 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.721958 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5r7cq" event={"ID":"067b5424-8f75-4bb9-ab09-588e4e306a28","Type":"ContainerDied","Data":"d528b0ee5cdcc3c74b6be0125ba8b9050c5885a6808688d7b153ceddf46e1503"} Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.722030 4761 scope.go:117] "RemoveContainer" containerID="a3c3a2734ca2fdfedd4aa0341d15fa1948d68cb529469c8595db30900e537e2e" Mar 07 08:12:10 crc kubenswrapper[4761]: E0307 08:12:10.725008 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-92qzx" podUID="dce2c706-6c24-4be8-b347-90448de8aaf9" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.733398 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "067b5424-8f75-4bb9-ab09-588e4e306a28" (UID: "067b5424-8f75-4bb9-ab09-588e4e306a28"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.733458 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-config" (OuterVolumeSpecName: "config") pod "067b5424-8f75-4bb9-ab09-588e4e306a28" (UID: "067b5424-8f75-4bb9-ab09-588e4e306a28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.745260 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "067b5424-8f75-4bb9-ab09-588e4e306a28" (UID: "067b5424-8f75-4bb9-ab09-588e4e306a28"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.751494 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "067b5424-8f75-4bb9-ab09-588e4e306a28" (UID: "067b5424-8f75-4bb9-ab09-588e4e306a28"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.779293 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrfs7\" (UniqueName: \"kubernetes.io/projected/067b5424-8f75-4bb9-ab09-588e4e306a28-kube-api-access-qrfs7\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.779327 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.779336 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.779349 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:10 crc 
kubenswrapper[4761]: I0307 08:12:10.779357 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:11 crc kubenswrapper[4761]: I0307 08:12:11.060082 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5r7cq"] Mar 07 08:12:11 crc kubenswrapper[4761]: I0307 08:12:11.072092 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5r7cq"] Mar 07 08:12:11 crc kubenswrapper[4761]: I0307 08:12:11.717610 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" path="/var/lib/kubelet/pods/067b5424-8f75-4bb9-ab09-588e4e306a28/volumes" Mar 07 08:12:11 crc kubenswrapper[4761]: E0307 08:12:11.971124 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Mar 07 08:12:11 crc kubenswrapper[4761]: E0307 08:12:11.971332 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxs4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-d9psc_openstack(782631b9-e01d-424c-af31-3471bfdf1587): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:12:11 crc kubenswrapper[4761]: E0307 08:12:11.972523 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-d9psc" podUID="782631b9-e01d-424c-af31-3471bfdf1587" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.100093 4761 scope.go:117] "RemoveContainer" containerID="c498f4c379cf8807574cdc10a758374ae889e538fe1b9f03b94de8aa56f32a78" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.155056 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-5r7cq" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: i/o timeout" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.254022 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.284456 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vthx6"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.417092 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") "
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.424520 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2k6h\" (UniqueName: \"kubernetes.io/projected/ade806c4-9da8-4204-b97b-35f0d84ffeb6-kube-api-access-t2k6h\") pod \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") "
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.424637 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-httpd-run\") pod \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") "
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.424849 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-config-data\") pod \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") "
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.425139 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-logs\") pod \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") "
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.425228 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-combined-ca-bundle\") pod \"0aa749a9-f668-4927-8a9a-28df83640ac4\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") "
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.425263 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-combined-ca-bundle\") pod \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") "
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.425280 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdhth\" (UniqueName: \"kubernetes.io/projected/0aa749a9-f668-4927-8a9a-28df83640ac4-kube-api-access-xdhth\") pod \"0aa749a9-f668-4927-8a9a-28df83640ac4\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") "
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.425309 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-config\") pod \"0aa749a9-f668-4927-8a9a-28df83640ac4\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") "
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.425333 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-scripts\") pod \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") "
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.426981 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-logs" (OuterVolumeSpecName: "logs") pod "ade806c4-9da8-4204-b97b-35f0d84ffeb6" (UID: "ade806c4-9da8-4204-b97b-35f0d84ffeb6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.429084 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ade806c4-9da8-4204-b97b-35f0d84ffeb6" (UID: "ade806c4-9da8-4204-b97b-35f0d84ffeb6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.429826 4761 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-httpd-run\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.429852 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-logs\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.448630 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade806c4-9da8-4204-b97b-35f0d84ffeb6-kube-api-access-t2k6h" (OuterVolumeSpecName: "kube-api-access-t2k6h") pod "ade806c4-9da8-4204-b97b-35f0d84ffeb6" (UID: "ade806c4-9da8-4204-b97b-35f0d84ffeb6"). InnerVolumeSpecName "kube-api-access-t2k6h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.457745 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-scripts" (OuterVolumeSpecName: "scripts") pod "ade806c4-9da8-4204-b97b-35f0d84ffeb6" (UID: "ade806c4-9da8-4204-b97b-35f0d84ffeb6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.457905 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aa749a9-f668-4927-8a9a-28df83640ac4-kube-api-access-xdhth" (OuterVolumeSpecName: "kube-api-access-xdhth") pod "0aa749a9-f668-4927-8a9a-28df83640ac4" (UID: "0aa749a9-f668-4927-8a9a-28df83640ac4"). InnerVolumeSpecName "kube-api-access-xdhth". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.535116 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdhth\" (UniqueName: \"kubernetes.io/projected/0aa749a9-f668-4927-8a9a-28df83640ac4-kube-api-access-xdhth\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.535151 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-scripts\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.535162 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2k6h\" (UniqueName: \"kubernetes.io/projected/ade806c4-9da8-4204-b97b-35f0d84ffeb6-kube-api-access-t2k6h\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.538001 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1" (OuterVolumeSpecName: "glance") pod "ade806c4-9da8-4204-b97b-35f0d84ffeb6" (UID: "ade806c4-9da8-4204-b97b-35f0d84ffeb6"). InnerVolumeSpecName "pvc-652e25a4-1797-4881-8c1b-50f95fd356e1". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.566566 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.595346 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547852-bt6bz"]
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.615459 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ade806c4-9da8-4204-b97b-35f0d84ffeb6" (UID: "ade806c4-9da8-4204-b97b-35f0d84ffeb6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.636760 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.636808 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") on node \"crc\" "
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.657221 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0aa749a9-f668-4927-8a9a-28df83640ac4" (UID: "0aa749a9-f668-4927-8a9a-28df83640ac4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.667511 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-config" (OuterVolumeSpecName: "config") pod "0aa749a9-f668-4927-8a9a-28df83640ac4" (UID: "0aa749a9-f668-4927-8a9a-28df83640ac4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.670456 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-config-data" (OuterVolumeSpecName: "config-data") pod "ade806c4-9da8-4204-b97b-35f0d84ffeb6" (UID: "ade806c4-9da8-4204-b97b-35f0d84ffeb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.671995 4761 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.672128 4761 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-652e25a4-1797-4881-8c1b-50f95fd356e1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1") on node "crc"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.712988 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mb4ct"]
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.728102 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.739207 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.739332 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-config\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.739440 4761 reconciler_common.go:293] "Volume detached for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.739497 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-config-data\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.748010 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kwf9k" event={"ID":"1302a491-8b5e-4d96-a192-ae81c6396870","Type":"ContainerStarted","Data":"f9d5ffeebc50db6db5ddcbc389945c33747c9e0d2dcc1353c4f6cd5238374d8b"}
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.750095 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wnsq8" event={"ID":"9b3dba79-45f7-4154-9691-fa333ba6ad0d","Type":"ContainerStarted","Data":"c28cc09420ea2ac493abf8f06587bcec5b390f6464161eeca9b61f712c64b3e1"}
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.753336 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mb4ct" event={"ID":"30f40316-2c99-4892-b3c5-9e3e61f05212","Type":"ContainerStarted","Data":"d9efe699d03c708c25907fd28a3ef6cee4fbd98319c0cc281fcb0984b34edbfd"}
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.755063 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vthx6" event={"ID":"0aa749a9-f668-4927-8a9a-28df83640ac4","Type":"ContainerDied","Data":"3804d71548a0e319d2c143c8d2c80e16b33d66372af96bec47ce524515f0bd80"}
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.755092 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3804d71548a0e319d2c143c8d2c80e16b33d66372af96bec47ce524515f0bd80"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.755145 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vthx6"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.762898 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"884da56902d61ce2a23842311611c1facb0e638b212880b855a9c7825ef51b45"}
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.767221 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547852-bt6bz" event={"ID":"dd21ae8c-0b60-48ed-b287-3f861535b5d6","Type":"ContainerStarted","Data":"c962988cd7d3d053113bbb0219170389def9481368a2418da590331f5dcd14c1"}
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.769532 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ade806c4-9da8-4204-b97b-35f0d84ffeb6","Type":"ContainerDied","Data":"2b2bed0c69200cc45c2dbf10fa19c8940ea196711052d4b23aa0df44bce1ab2e"}
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.769638 4761 scope.go:117] "RemoveContainer" containerID="d0d15d62ed51411748f8b5cd631263da5c641011d92d42e6b0ec72249a793814"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.769841 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.776422 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerStarted","Data":"43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f"}
Mar 07 08:12:12 crc kubenswrapper[4761]: E0307 08:12:12.777554 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-d9psc" podUID="782631b9-e01d-424c-af31-3471bfdf1587"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.780436 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-kwf9k" podStartSLOduration=2.482471115 podStartE2EDuration="36.780415842s" podCreationTimestamp="2026-03-07 08:11:36 +0000 UTC" firstStartedPulling="2026-03-07 08:11:37.864473337 +0000 UTC m=+1354.773639812" lastFinishedPulling="2026-03-07 08:12:12.162418064 +0000 UTC m=+1389.071584539" observedRunningTime="2026-03-07 08:12:12.765146927 +0000 UTC m=+1389.674313402" watchObservedRunningTime="2026-03-07 08:12:12.780415842 +0000 UTC m=+1389.689582317"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.793759 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-wnsq8" podStartSLOduration=3.385959172 podStartE2EDuration="37.793738688s" podCreationTimestamp="2026-03-07 08:11:35 +0000 UTC" firstStartedPulling="2026-03-07 08:11:37.819911925 +0000 UTC m=+1354.729078400" lastFinishedPulling="2026-03-07 08:12:12.227691441 +0000 UTC m=+1389.136857916" observedRunningTime="2026-03-07 08:12:12.78744584 +0000 UTC m=+1389.696612335" watchObservedRunningTime="2026-03-07 08:12:12.793738688 +0000 UTC m=+1389.702905163"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.817862 4761 scope.go:117] "RemoveContainer" containerID="49f77926d06e2e04414379285e8dc0cfdc7b0497e02b0d16e01e837153f8b1c6"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.938535 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.964526 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.979499 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 07 08:12:12 crc kubenswrapper[4761]: E0307 08:12:12.980141 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa749a9-f668-4927-8a9a-28df83640ac4" containerName="neutron-db-sync"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.980161 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa749a9-f668-4927-8a9a-28df83640ac4" containerName="neutron-db-sync"
Mar 07 08:12:12 crc kubenswrapper[4761]: E0307 08:12:12.980171 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="dnsmasq-dns"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.980178 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="dnsmasq-dns"
Mar 07 08:12:12 crc kubenswrapper[4761]: E0307 08:12:12.980188 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="init"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.980194 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="init"
Mar 07 08:12:12 crc kubenswrapper[4761]: E0307 08:12:12.980204 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerName="glance-httpd"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.980210 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerName="glance-httpd"
Mar 07 08:12:12 crc kubenswrapper[4761]: E0307 08:12:12.980226 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerName="glance-log"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.980231 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerName="glance-log"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.980473 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerName="glance-httpd"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.980493 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerName="glance-log"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.980513 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aa749a9-f668-4927-8a9a-28df83640ac4" containerName="neutron-db-sync"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.980525 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="dnsmasq-dns"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.981691 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.986705 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.986944 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.991224 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.147109 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.147345 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.147416 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-logs\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.147435 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-config-data\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.147474 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-scripts\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.147494 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwg2n\" (UniqueName: \"kubernetes.io/projected/05b0e93e-5cbe-4e36-ada4-ff90ea710789-kube-api-access-rwg2n\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.147709 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.147924 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.250389 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.250936 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.250979 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.251791 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.251919 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-logs\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.251946 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-config-data\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.252517 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-logs\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.252644 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-scripts\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.252675 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwg2n\" (UniqueName: \"kubernetes.io/projected/05b0e93e-5cbe-4e36-ada4-ff90ea710789-kube-api-access-rwg2n\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.253045 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.258584 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-config-data\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.261360 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-scripts\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.263475 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.265519 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.277337 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwg2n\" (UniqueName: \"kubernetes.io/projected/05b0e93e-5cbe-4e36-ada4-ff90ea710789-kube-api-access-rwg2n\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.278278 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.278318 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f1ce1c096842b9627111c5f89fad26fafb9d1f61d1f48c8efc1ee653de0d59a3/globalmount\"" pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.328462 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.345348 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.629851 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-2dmg9"]
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.632186 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.648892 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-2dmg9"]
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.682184 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-57b6497888-fkqsr"]
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.684627 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-57b6497888-fkqsr"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.687374 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.687642 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.687706 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.688580 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-42xrl"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.700708 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57b6497888-fkqsr"]
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.740555 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" path="/var/lib/kubelet/pods/ade806c4-9da8-4204-b97b-35f0d84ffeb6/volumes"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.789367 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-httpd-config\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.789501 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-combined-ca-bundle\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.789538 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zs4h\" (UniqueName: \"kubernetes.io/projected/1feced41-f55d-41bf-a1fb-3c49a768ea5b-kube-api-access-5zs4h\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.789669 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-svc\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.789702 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.789789 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-config\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.789864 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.789908 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvwj5\" (UniqueName: \"kubernetes.io/projected/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-kube-api-access-dvwj5\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.789936 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.790008 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-config\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.790031 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-ovndb-tls-certs\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.825825 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93d41018-801a-4081-8e8f-5f8809cb0e41","Type":"ContainerStarted","Data":"30878be95b336baeb8868c773089d005aabb485de4ba6b2468e6e96919b13eec"}
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.825998 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerName="glance-log" containerID="cri-o://f3ab7c6d5a03f429467b96c88afc5ca3cbd8e769a0f3e529010526f7deb8dcae" gracePeriod=30
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.826887 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerName="glance-httpd" containerID="cri-o://30878be95b336baeb8868c773089d005aabb485de4ba6b2468e6e96919b13eec" gracePeriod=30
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.853805 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=34.853785983 podStartE2EDuration="34.853785983s" podCreationTimestamp="2026-03-07 08:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:13.849468024 +0000 UTC m=+1390.758634499" watchObservedRunningTime="2026-03-07 08:12:13.853785983 +0000 UTC m=+1390.762952458"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.863816 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mb4ct" event={"ID":"30f40316-2c99-4892-b3c5-9e3e61f05212","Type":"ContainerStarted","Data":"72c5aef6ae252c2f4b34e163aee65c7757addb3a89f37b5d72863ebaa2775b47"}
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.893244 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9"
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.893304 4761
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvwj5\" (UniqueName: \"kubernetes.io/projected/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-kube-api-access-dvwj5\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.893323 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.893363 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-config\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.893406 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-ovndb-tls-certs\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.893545 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-httpd-config\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.894859 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-combined-ca-bundle\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.894922 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zs4h\" (UniqueName: \"kubernetes.io/projected/1feced41-f55d-41bf-a1fb-3c49a768ea5b-kube-api-access-5zs4h\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.894929 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.895023 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-svc\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.895099 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.895173 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-config\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.894284 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.896084 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-svc\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.896561 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.897106 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-config\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.899054 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-httpd-config\") pod \"neutron-57b6497888-fkqsr\" (UID: 
\"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.906664 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-config\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.909764 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-combined-ca-bundle\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.911959 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-ovndb-tls-certs\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.912051 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mb4ct" podStartSLOduration=26.912023483 podStartE2EDuration="26.912023483s" podCreationTimestamp="2026-03-07 08:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:13.888086359 +0000 UTC m=+1390.797252854" watchObservedRunningTime="2026-03-07 08:12:13.912023483 +0000 UTC m=+1390.821189958" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.915375 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvwj5\" (UniqueName: 
\"kubernetes.io/projected/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-kube-api-access-dvwj5\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.926200 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zs4h\" (UniqueName: \"kubernetes.io/projected/1feced41-f55d-41bf-a1fb-3c49a768ea5b-kube-api-access-5zs4h\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.971673 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:14 crc kubenswrapper[4761]: I0307 08:12:14.056272 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:14 crc kubenswrapper[4761]: I0307 08:12:14.184993 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:12:14 crc kubenswrapper[4761]: I0307 08:12:14.626971 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-2dmg9"] Mar 07 08:12:14 crc kubenswrapper[4761]: I0307 08:12:14.941897 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547852-bt6bz" event={"ID":"dd21ae8c-0b60-48ed-b287-3f861535b5d6","Type":"ContainerStarted","Data":"d5c3fbc73137202537359f63da2e062c34122ec37fea57f7f56fe096047b762b"} Mar 07 08:12:14 crc kubenswrapper[4761]: I0307 08:12:14.971749 4761 generic.go:334] "Generic (PLEG): container finished" podID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerID="30878be95b336baeb8868c773089d005aabb485de4ba6b2468e6e96919b13eec" exitCode=0 Mar 07 08:12:14 crc kubenswrapper[4761]: I0307 08:12:14.971780 4761 generic.go:334] "Generic (PLEG): container 
finished" podID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerID="f3ab7c6d5a03f429467b96c88afc5ca3cbd8e769a0f3e529010526f7deb8dcae" exitCode=143 Mar 07 08:12:14 crc kubenswrapper[4761]: I0307 08:12:14.971816 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93d41018-801a-4081-8e8f-5f8809cb0e41","Type":"ContainerDied","Data":"30878be95b336baeb8868c773089d005aabb485de4ba6b2468e6e96919b13eec"} Mar 07 08:12:14 crc kubenswrapper[4761]: I0307 08:12:14.971840 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93d41018-801a-4081-8e8f-5f8809cb0e41","Type":"ContainerDied","Data":"f3ab7c6d5a03f429467b96c88afc5ca3cbd8e769a0f3e529010526f7deb8dcae"} Mar 07 08:12:14 crc kubenswrapper[4761]: I0307 08:12:14.997535 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05b0e93e-5cbe-4e36-ada4-ff90ea710789","Type":"ContainerStarted","Data":"5f99fe4eaa0d6654572f8474c020d7e045645f945574566ab31bfb408d79ce3e"} Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.008237 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57b6497888-fkqsr"] Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.013311 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" event={"ID":"1feced41-f55d-41bf-a1fb-3c49a768ea5b","Type":"ContainerStarted","Data":"9e9adff463c65d7c6bb0ccc48d5be6576530813a03c5a123454224aeb14c06bf"} Mar 07 08:12:15 crc kubenswrapper[4761]: W0307 08:12:15.014726 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf50e645a_ba6c_49d5_95a9_3d60c78a1c8a.slice/crio-fce39649108dd6c35261761b0b230664523883842506b7bc99ece68767f72a5f WatchSource:0}: Error finding container fce39649108dd6c35261761b0b230664523883842506b7bc99ece68767f72a5f: Status 404 
returned error can't find the container with id fce39649108dd6c35261761b0b230664523883842506b7bc99ece68767f72a5f Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.188753 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.252580 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-combined-ca-bundle\") pod \"93d41018-801a-4081-8e8f-5f8809cb0e41\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.252666 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-httpd-run\") pod \"93d41018-801a-4081-8e8f-5f8809cb0e41\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.252835 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-config-data\") pod \"93d41018-801a-4081-8e8f-5f8809cb0e41\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.252894 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-logs\") pod \"93d41018-801a-4081-8e8f-5f8809cb0e41\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.253048 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"93d41018-801a-4081-8e8f-5f8809cb0e41\" 
(UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.253091 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-scripts\") pod \"93d41018-801a-4081-8e8f-5f8809cb0e41\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.253119 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqplz\" (UniqueName: \"kubernetes.io/projected/93d41018-801a-4081-8e8f-5f8809cb0e41-kube-api-access-pqplz\") pod \"93d41018-801a-4081-8e8f-5f8809cb0e41\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.254312 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-logs" (OuterVolumeSpecName: "logs") pod "93d41018-801a-4081-8e8f-5f8809cb0e41" (UID: "93d41018-801a-4081-8e8f-5f8809cb0e41"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.254982 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "93d41018-801a-4081-8e8f-5f8809cb0e41" (UID: "93d41018-801a-4081-8e8f-5f8809cb0e41"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.291659 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-scripts" (OuterVolumeSpecName: "scripts") pod "93d41018-801a-4081-8e8f-5f8809cb0e41" (UID: "93d41018-801a-4081-8e8f-5f8809cb0e41"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.298969 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93d41018-801a-4081-8e8f-5f8809cb0e41-kube-api-access-pqplz" (OuterVolumeSpecName: "kube-api-access-pqplz") pod "93d41018-801a-4081-8e8f-5f8809cb0e41" (UID: "93d41018-801a-4081-8e8f-5f8809cb0e41"). InnerVolumeSpecName "kube-api-access-pqplz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.355337 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.355367 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.355376 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqplz\" (UniqueName: \"kubernetes.io/projected/93d41018-801a-4081-8e8f-5f8809cb0e41-kube-api-access-pqplz\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.355385 4761 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.461110 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b" (OuterVolumeSpecName: "glance") pod "93d41018-801a-4081-8e8f-5f8809cb0e41" (UID: "93d41018-801a-4081-8e8f-5f8809cb0e41"). InnerVolumeSpecName "pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.506595 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-config-data" (OuterVolumeSpecName: "config-data") pod "93d41018-801a-4081-8e8f-5f8809cb0e41" (UID: "93d41018-801a-4081-8e8f-5f8809cb0e41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.513850 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93d41018-801a-4081-8e8f-5f8809cb0e41" (UID: "93d41018-801a-4081-8e8f-5f8809cb0e41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.561442 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.561644 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") on node \"crc\" " Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.561667 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.610026 4761 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.610221 4761 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b") on node "crc" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.663697 4761 reconciler_common.go:293] "Volume detached for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.028432 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05b0e93e-5cbe-4e36-ada4-ff90ea710789","Type":"ContainerStarted","Data":"b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84"} Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.032683 4761 generic.go:334] "Generic (PLEG): container finished" podID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" containerID="0d093c0692b1a14616aa39efb24254b44c88f721100e0fd4189d8017719b5052" exitCode=0 Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.032768 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" event={"ID":"1feced41-f55d-41bf-a1fb-3c49a768ea5b","Type":"ContainerDied","Data":"0d093c0692b1a14616aa39efb24254b44c88f721100e0fd4189d8017719b5052"} Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.040896 4761 generic.go:334] "Generic (PLEG): container finished" podID="dd21ae8c-0b60-48ed-b287-3f861535b5d6" containerID="d5c3fbc73137202537359f63da2e062c34122ec37fea57f7f56fe096047b762b" exitCode=0 Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.040957 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547852-bt6bz" 
event={"ID":"dd21ae8c-0b60-48ed-b287-3f861535b5d6","Type":"ContainerDied","Data":"d5c3fbc73137202537359f63da2e062c34122ec37fea57f7f56fe096047b762b"} Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.060777 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93d41018-801a-4081-8e8f-5f8809cb0e41","Type":"ContainerDied","Data":"4eaf66b670ef081e714a15a16a15827499d9c0af073870562be41b7d510fef4e"} Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.060828 4761 scope.go:117] "RemoveContainer" containerID="30878be95b336baeb8868c773089d005aabb485de4ba6b2468e6e96919b13eec" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.061003 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.068500 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57b6497888-fkqsr" event={"ID":"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a","Type":"ContainerStarted","Data":"fce39649108dd6c35261761b0b230664523883842506b7bc99ece68767f72a5f"} Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.379838 4761 scope.go:117] "RemoveContainer" containerID="f3ab7c6d5a03f429467b96c88afc5ca3cbd8e769a0f3e529010526f7deb8dcae" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.409269 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.423410 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.492888 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:12:16 crc kubenswrapper[4761]: E0307 08:12:16.493434 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d41018-801a-4081-8e8f-5f8809cb0e41" 
containerName="glance-httpd" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.493451 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerName="glance-httpd" Mar 07 08:12:16 crc kubenswrapper[4761]: E0307 08:12:16.493488 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerName="glance-log" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.493496 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerName="glance-log" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.493783 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerName="glance-log" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.493801 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerName="glance-httpd" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.494924 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.506613 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.506851 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.542422 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.554952 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-795c9dd6fc-kqgf4"] Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.557019 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.561041 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.561313 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.566985 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-795c9dd6fc-kqgf4"] Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.686550 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547852-bt6bz" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.699880 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-logs\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.699949 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.701587 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc 
kubenswrapper[4761]: I0307 08:12:16.701651 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpkhv\" (UniqueName: \"kubernetes.io/projected/a592362d-7e1a-4be8-9dc7-84ee7a6170db-kube-api-access-qpkhv\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.701677 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-combined-ca-bundle\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.701703 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-config\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.701750 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.701811 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-httpd-config\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 
crc kubenswrapper[4761]: I0307 08:12:16.701849 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.701868 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-public-tls-certs\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.701906 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-internal-tls-certs\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.701948 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.702001 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnvnq\" (UniqueName: \"kubernetes.io/projected/e27c72db-fb0c-4db5-965c-2f859f151114-kube-api-access-jnvnq\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: 
\"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.702049 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-ovndb-tls-certs\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.702078 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806023 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jb8q\" (UniqueName: \"kubernetes.io/projected/dd21ae8c-0b60-48ed-b287-3f861535b5d6-kube-api-access-8jb8q\") pod \"dd21ae8c-0b60-48ed-b287-3f861535b5d6\" (UID: \"dd21ae8c-0b60-48ed-b287-3f861535b5d6\") " Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806370 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-httpd-config\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806426 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " 
pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806444 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-public-tls-certs\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806498 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-internal-tls-certs\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806533 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806574 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnvnq\" (UniqueName: \"kubernetes.io/projected/e27c72db-fb0c-4db5-965c-2f859f151114-kube-api-access-jnvnq\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806609 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-ovndb-tls-certs\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " 
pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806637 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806669 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-logs\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806687 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806740 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.807171 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpkhv\" (UniqueName: \"kubernetes.io/projected/a592362d-7e1a-4be8-9dc7-84ee7a6170db-kube-api-access-qpkhv\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 
crc kubenswrapper[4761]: I0307 08:12:16.807215 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-combined-ca-bundle\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.807242 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-config\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.807272 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.808974 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-logs\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.812704 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-internal-tls-certs\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.814625 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.814673 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/851ce73d1b192d58f34aae6f8e819bd73d3fa6a2538f169362f333663b0c473e/globalmount\"" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.815024 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-ovndb-tls-certs\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.822396 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.822497 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.822477 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.822984 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd21ae8c-0b60-48ed-b287-3f861535b5d6-kube-api-access-8jb8q" (OuterVolumeSpecName: "kube-api-access-8jb8q") pod "dd21ae8c-0b60-48ed-b287-3f861535b5d6" (UID: "dd21ae8c-0b60-48ed-b287-3f861535b5d6"). InnerVolumeSpecName "kube-api-access-8jb8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.823589 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.824294 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-public-tls-certs\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.824524 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-httpd-config\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.824733 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-combined-ca-bundle\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.825170 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.827899 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnvnq\" (UniqueName: \"kubernetes.io/projected/e27c72db-fb0c-4db5-965c-2f859f151114-kube-api-access-jnvnq\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.831670 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpkhv\" (UniqueName: \"kubernetes.io/projected/a592362d-7e1a-4be8-9dc7-84ee7a6170db-kube-api-access-qpkhv\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.833972 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-config\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.876769 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.910194 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jb8q\" (UniqueName: \"kubernetes.io/projected/dd21ae8c-0b60-48ed-b287-3f861535b5d6-kube-api-access-8jb8q\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.972218 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.082510 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerStarted","Data":"d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b"} Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.099276 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57b6497888-fkqsr" event={"ID":"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a","Type":"ContainerStarted","Data":"4543315a954e687de718677391705bbdbd0406d681cb87f746858f8b56f4bc7b"} Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.099337 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57b6497888-fkqsr" event={"ID":"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a","Type":"ContainerStarted","Data":"08a531bbea56745d96bf5808403f6dcf83e2a3f8d100ed5e64c06f8c0c91449a"} Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.101467 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.112027 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"05b0e93e-5cbe-4e36-ada4-ff90ea710789","Type":"ContainerStarted","Data":"18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20"} Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.122389 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-57b6497888-fkqsr" podStartSLOduration=4.122374613 podStartE2EDuration="4.122374613s" podCreationTimestamp="2026-03-07 08:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:17.120532536 +0000 UTC m=+1394.029699011" watchObservedRunningTime="2026-03-07 08:12:17.122374613 +0000 UTC m=+1394.031541088" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.135539 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.136233 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" event={"ID":"1feced41-f55d-41bf-a1fb-3c49a768ea5b","Type":"ContainerStarted","Data":"8770483a09bf6a7b3c50c01184e37a888d5d93e3afa587afe6190ed3256c62ff"} Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.137170 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.142240 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547852-bt6bz" event={"ID":"dd21ae8c-0b60-48ed-b287-3f861535b5d6","Type":"ContainerDied","Data":"c962988cd7d3d053113bbb0219170389def9481368a2418da590331f5dcd14c1"} Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.142283 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c962988cd7d3d053113bbb0219170389def9481368a2418da590331f5dcd14c1" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.142334 
4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547852-bt6bz" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.168851 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.168829305 podStartE2EDuration="5.168829305s" podCreationTimestamp="2026-03-07 08:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:17.151958609 +0000 UTC m=+1394.061125084" watchObservedRunningTime="2026-03-07 08:12:17.168829305 +0000 UTC m=+1394.077995780" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.179508 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" podStartSLOduration=4.179493004 podStartE2EDuration="4.179493004s" podCreationTimestamp="2026-03-07 08:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:17.179029163 +0000 UTC m=+1394.088195638" watchObservedRunningTime="2026-03-07 08:12:17.179493004 +0000 UTC m=+1394.088659469" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.717152 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93d41018-801a-4081-8e8f-5f8809cb0e41" path="/var/lib/kubelet/pods/93d41018-801a-4081-8e8f-5f8809cb0e41/volumes" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.791781 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547846-tz9jt"] Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.804584 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547846-tz9jt"] Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.819375 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-795c9dd6fc-kqgf4"] Mar 07 08:12:17 crc kubenswrapper[4761]: W0307 08:12:17.820077 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode27c72db_fb0c_4db5_965c_2f859f151114.slice/crio-b49dc3f330e56a74796d2982a561612c7903bc1e0336d6b59fb45d5b704fb1e9 WatchSource:0}: Error finding container b49dc3f330e56a74796d2982a561612c7903bc1e0336d6b59fb45d5b704fb1e9: Status 404 returned error can't find the container with id b49dc3f330e56a74796d2982a561612c7903bc1e0336d6b59fb45d5b704fb1e9 Mar 07 08:12:18 crc kubenswrapper[4761]: I0307 08:12:18.177421 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-795c9dd6fc-kqgf4" event={"ID":"e27c72db-fb0c-4db5-965c-2f859f151114","Type":"ContainerStarted","Data":"7bda07256ee2627429245d18d5649b3657d12f6cebca25a531829eea2aa0e074"} Mar 07 08:12:18 crc kubenswrapper[4761]: I0307 08:12:18.178165 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-795c9dd6fc-kqgf4" event={"ID":"e27c72db-fb0c-4db5-965c-2f859f151114","Type":"ContainerStarted","Data":"b49dc3f330e56a74796d2982a561612c7903bc1e0336d6b59fb45d5b704fb1e9"} Mar 07 08:12:18 crc kubenswrapper[4761]: I0307 08:12:18.660013 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:12:19 crc kubenswrapper[4761]: I0307 08:12:19.206024 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a592362d-7e1a-4be8-9dc7-84ee7a6170db","Type":"ContainerStarted","Data":"971da60208d3b6ab528e27a23204c4e439302fa13aa18c215aa3e84d3072a45f"} Mar 07 08:12:19 crc kubenswrapper[4761]: I0307 08:12:19.211036 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-795c9dd6fc-kqgf4" event={"ID":"e27c72db-fb0c-4db5-965c-2f859f151114","Type":"ContainerStarted","Data":"f0f3124d8f6910b941dc6607e892a98ae3067b4ce30a70b32703105114946abc"} 
Mar 07 08:12:19 crc kubenswrapper[4761]: I0307 08:12:19.211724 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:19 crc kubenswrapper[4761]: I0307 08:12:19.242768 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-795c9dd6fc-kqgf4" podStartSLOduration=3.2427381 podStartE2EDuration="3.2427381s" podCreationTimestamp="2026-03-07 08:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:19.235632801 +0000 UTC m=+1396.144799286" watchObservedRunningTime="2026-03-07 08:12:19.2427381 +0000 UTC m=+1396.151904575" Mar 07 08:12:19 crc kubenswrapper[4761]: I0307 08:12:19.737938 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3231b68-1f7c-4c26-b4c8-887862d28e06" path="/var/lib/kubelet/pods/c3231b68-1f7c-4c26-b4c8-887862d28e06/volumes" Mar 07 08:12:20 crc kubenswrapper[4761]: I0307 08:12:20.224434 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a592362d-7e1a-4be8-9dc7-84ee7a6170db","Type":"ContainerStarted","Data":"a66fd390bbf68a3e4ff357dfdc728b5dbac9c698af22e5a0692112931c9003d1"} Mar 07 08:12:20 crc kubenswrapper[4761]: I0307 08:12:20.225705 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a592362d-7e1a-4be8-9dc7-84ee7a6170db","Type":"ContainerStarted","Data":"f0fc1f72a75b1539d68a0602b27e008732b428b7fe5f595bd67a5269690ae4c1"} Mar 07 08:12:20 crc kubenswrapper[4761]: I0307 08:12:20.243421 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.243408427 podStartE2EDuration="4.243408427s" podCreationTimestamp="2026-03-07 08:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:20.241395466 +0000 UTC m=+1397.150561941" watchObservedRunningTime="2026-03-07 08:12:20.243408427 +0000 UTC m=+1397.152574902" Mar 07 08:12:21 crc kubenswrapper[4761]: I0307 08:12:21.241908 4761 generic.go:334] "Generic (PLEG): container finished" podID="1302a491-8b5e-4d96-a192-ae81c6396870" containerID="f9d5ffeebc50db6db5ddcbc389945c33747c9e0d2dcc1353c4f6cd5238374d8b" exitCode=0 Mar 07 08:12:21 crc kubenswrapper[4761]: I0307 08:12:21.242001 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kwf9k" event={"ID":"1302a491-8b5e-4d96-a192-ae81c6396870","Type":"ContainerDied","Data":"f9d5ffeebc50db6db5ddcbc389945c33747c9e0d2dcc1353c4f6cd5238374d8b"} Mar 07 08:12:23 crc kubenswrapper[4761]: I0307 08:12:23.345567 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 07 08:12:23 crc kubenswrapper[4761]: I0307 08:12:23.345925 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 07 08:12:23 crc kubenswrapper[4761]: I0307 08:12:23.399218 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 07 08:12:23 crc kubenswrapper[4761]: I0307 08:12:23.401941 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 07 08:12:23 crc kubenswrapper[4761]: I0307 08:12:23.972952 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:24 crc kubenswrapper[4761]: I0307 08:12:24.082297 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-fmpdp"] Mar 07 08:12:24 crc kubenswrapper[4761]: I0307 08:12:24.082642 4761 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" podUID="538ded96-3415-417f-8b82-5e29c85bf943" containerName="dnsmasq-dns" containerID="cri-o://4731ffe1012d05ac8d1c43ca1ea7417657ab91649645937ec299cefb6cbc4e8c" gracePeriod=10 Mar 07 08:12:24 crc kubenswrapper[4761]: I0307 08:12:24.274935 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 07 08:12:24 crc kubenswrapper[4761]: I0307 08:12:24.274987 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 07 08:12:24 crc kubenswrapper[4761]: I0307 08:12:24.629572 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" podUID="538ded96-3415-417f-8b82-5e29c85bf943" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.192:5353: connect: connection refused" Mar 07 08:12:25 crc kubenswrapper[4761]: I0307 08:12:25.295589 4761 generic.go:334] "Generic (PLEG): container finished" podID="538ded96-3415-417f-8b82-5e29c85bf943" containerID="4731ffe1012d05ac8d1c43ca1ea7417657ab91649645937ec299cefb6cbc4e8c" exitCode=0 Mar 07 08:12:25 crc kubenswrapper[4761]: I0307 08:12:25.295751 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" event={"ID":"538ded96-3415-417f-8b82-5e29c85bf943","Type":"ContainerDied","Data":"4731ffe1012d05ac8d1c43ca1ea7417657ab91649645937ec299cefb6cbc4e8c"} Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.306322 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.306550 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.855850 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-kwf9k" Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.893042 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-config-data\") pod \"1302a491-8b5e-4d96-a192-ae81c6396870\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.893136 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-combined-ca-bundle\") pod \"1302a491-8b5e-4d96-a192-ae81c6396870\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.893206 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml5bj\" (UniqueName: \"kubernetes.io/projected/1302a491-8b5e-4d96-a192-ae81c6396870-kube-api-access-ml5bj\") pod \"1302a491-8b5e-4d96-a192-ae81c6396870\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.893258 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-scripts\") pod \"1302a491-8b5e-4d96-a192-ae81c6396870\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.894298 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1302a491-8b5e-4d96-a192-ae81c6396870-logs\") pod \"1302a491-8b5e-4d96-a192-ae81c6396870\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.895915 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1302a491-8b5e-4d96-a192-ae81c6396870-logs" (OuterVolumeSpecName: "logs") pod "1302a491-8b5e-4d96-a192-ae81c6396870" (UID: "1302a491-8b5e-4d96-a192-ae81c6396870"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.901036 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1302a491-8b5e-4d96-a192-ae81c6396870-kube-api-access-ml5bj" (OuterVolumeSpecName: "kube-api-access-ml5bj") pod "1302a491-8b5e-4d96-a192-ae81c6396870" (UID: "1302a491-8b5e-4d96-a192-ae81c6396870"). InnerVolumeSpecName "kube-api-access-ml5bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.903362 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-scripts" (OuterVolumeSpecName: "scripts") pod "1302a491-8b5e-4d96-a192-ae81c6396870" (UID: "1302a491-8b5e-4d96-a192-ae81c6396870"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.928480 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-config-data" (OuterVolumeSpecName: "config-data") pod "1302a491-8b5e-4d96-a192-ae81c6396870" (UID: "1302a491-8b5e-4d96-a192-ae81c6396870"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.958182 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1302a491-8b5e-4d96-a192-ae81c6396870" (UID: "1302a491-8b5e-4d96-a192-ae81c6396870"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.004782 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1302a491-8b5e-4d96-a192-ae81c6396870-logs\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.005059 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-config-data\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.005195 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.005347 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml5bj\" (UniqueName: \"kubernetes.io/projected/1302a491-8b5e-4d96-a192-ae81c6396870-kube-api-access-ml5bj\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.005493 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-scripts\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.136548 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.136605 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.186025 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.193562 4761
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.327349 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kwf9k"
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.327490 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kwf9k" event={"ID":"1302a491-8b5e-4d96-a192-ae81c6396870","Type":"ContainerDied","Data":"37069611aa3b30f5ad29c74502df3567823a99c10fc10de76b428ece21310540"}
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.327819 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37069611aa3b30f5ad29c74502df3567823a99c10fc10de76b428ece21310540"
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.327842 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.327856 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.509502 4761 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp"
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.623960 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zknfr\" (UniqueName: \"kubernetes.io/projected/538ded96-3415-417f-8b82-5e29c85bf943-kube-api-access-zknfr\") pod \"538ded96-3415-417f-8b82-5e29c85bf943\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") "
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.624134 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-config\") pod \"538ded96-3415-417f-8b82-5e29c85bf943\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") "
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.624208 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-nb\") pod \"538ded96-3415-417f-8b82-5e29c85bf943\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") "
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.624258 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-swift-storage-0\") pod \"538ded96-3415-417f-8b82-5e29c85bf943\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") "
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.624318 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-svc\") pod \"538ded96-3415-417f-8b82-5e29c85bf943\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") "
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.624401 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\"
(UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-sb\") pod \"538ded96-3415-417f-8b82-5e29c85bf943\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") "
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.634955 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/538ded96-3415-417f-8b82-5e29c85bf943-kube-api-access-zknfr" (OuterVolumeSpecName: "kube-api-access-zknfr") pod "538ded96-3415-417f-8b82-5e29c85bf943" (UID: "538ded96-3415-417f-8b82-5e29c85bf943"). InnerVolumeSpecName "kube-api-access-zknfr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.689387 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "538ded96-3415-417f-8b82-5e29c85bf943" (UID: "538ded96-3415-417f-8b82-5e29c85bf943"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.702100 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "538ded96-3415-417f-8b82-5e29c85bf943" (UID: "538ded96-3415-417f-8b82-5e29c85bf943"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.703688 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "538ded96-3415-417f-8b82-5e29c85bf943" (UID: "538ded96-3415-417f-8b82-5e29c85bf943"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.705414 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "538ded96-3415-417f-8b82-5e29c85bf943" (UID: "538ded96-3415-417f-8b82-5e29c85bf943"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.709628 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-config" (OuterVolumeSpecName: "config") pod "538ded96-3415-417f-8b82-5e29c85bf943" (UID: "538ded96-3415-417f-8b82-5e29c85bf943"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.726476 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.726504 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zknfr\" (UniqueName: \"kubernetes.io/projected/538ded96-3415-417f-8b82-5e29c85bf943-kube-api-access-zknfr\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.726517 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-config\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.726525 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-nb\") on node \"crc\" DevicePath
\"\""
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.726534 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.726542 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.114391 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-548cccfb88-8f8gk"]
Mar 07 08:12:28 crc kubenswrapper[4761]: E0307 08:12:28.115096 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="538ded96-3415-417f-8b82-5e29c85bf943" containerName="dnsmasq-dns"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.115114 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="538ded96-3415-417f-8b82-5e29c85bf943" containerName="dnsmasq-dns"
Mar 07 08:12:28 crc kubenswrapper[4761]: E0307 08:12:28.115131 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="538ded96-3415-417f-8b82-5e29c85bf943" containerName="init"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.115137 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="538ded96-3415-417f-8b82-5e29c85bf943" containerName="init"
Mar 07 08:12:28 crc kubenswrapper[4761]: E0307 08:12:28.115146 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1302a491-8b5e-4d96-a192-ae81c6396870" containerName="placement-db-sync"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.115152 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1302a491-8b5e-4d96-a192-ae81c6396870" containerName="placement-db-sync"
Mar 07 08:12:28 crc kubenswrapper[4761]: E0307 08:12:28.115170 4761 cpu_manager.go:410] "RemoveStaleState: removing container"
podUID="dd21ae8c-0b60-48ed-b287-3f861535b5d6" containerName="oc"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.115176 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd21ae8c-0b60-48ed-b287-3f861535b5d6" containerName="oc"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.115402 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="538ded96-3415-417f-8b82-5e29c85bf943" containerName="dnsmasq-dns"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.115428 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1302a491-8b5e-4d96-a192-ae81c6396870" containerName="placement-db-sync"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.115440 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd21ae8c-0b60-48ed-b287-3f861535b5d6" containerName="oc"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.116480 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.120126 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.120237 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.120305 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.120491 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4cztd"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.120671 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.134524 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openstack/placement-548cccfb88-8f8gk"]
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.204876 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-combined-ca-bundle\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.205201 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk987\" (UniqueName: \"kubernetes.io/projected/befe03c6-a479-47be-a462-d94a93217344-kube-api-access-qk987\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.205353 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-public-tls-certs\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.205445 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-scripts\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.205520 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-config-data\") pod \"placement-548cccfb88-8f8gk\" (UID:
\"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.205624 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befe03c6-a479-47be-a462-d94a93217344-logs\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.205784 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-internal-tls-certs\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.307649 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-scripts\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.307989 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-config-data\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.308039 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befe03c6-a479-47be-a462-d94a93217344-logs\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07
08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.308111 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-internal-tls-certs\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.308148 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-combined-ca-bundle\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.308235 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk987\" (UniqueName: \"kubernetes.io/projected/befe03c6-a479-47be-a462-d94a93217344-kube-api-access-qk987\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.308292 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-public-tls-certs\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.309144 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befe03c6-a479-47be-a462-d94a93217344-logs\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.313574 4761
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-scripts\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.314738 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-combined-ca-bundle\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.315320 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-public-tls-certs\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.316665 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-config-data\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.328958 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-internal-tls-certs\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.348979 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk987\" (UniqueName:
\"kubernetes.io/projected/befe03c6-a479-47be-a462-d94a93217344-kube-api-access-qk987\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.358825 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.358838 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" event={"ID":"538ded96-3415-417f-8b82-5e29c85bf943","Type":"ContainerDied","Data":"8b61496bdc0ba9cdbb66cd8ab5e8c4b517098cd8c010b8f4c5a0c0fd26c3cd65"}
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.358904 4761 scope.go:117] "RemoveContainer" containerID="4731ffe1012d05ac8d1c43ca1ea7417657ab91649645937ec299cefb6cbc4e8c"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.437084 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-fmpdp"]
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.447557 4761 scope.go:117] "RemoveContainer" containerID="9186130caa21c79ac5e7c5f0448f28d89ff465dd03e9542cb0fa32079fc08ea6"
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.448787 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-fmpdp"]
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.452503 4761 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:28 crc kubenswrapper[4761]: E0307 08:12:28.794917 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30f40316_2c99_4892_b3c5_9e3e61f05212.slice/crio-conmon-72c5aef6ae252c2f4b34e163aee65c7757addb3a89f37b5d72863ebaa2775b47.scope\": RecentStats: unable to find data in memory cache]"
Mar 07 08:12:29 crc kubenswrapper[4761]: I0307 08:12:29.182014 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-548cccfb88-8f8gk"]
Mar 07 08:12:29 crc kubenswrapper[4761]: I0307 08:12:29.369954 4761 generic.go:334] "Generic (PLEG): container finished" podID="30f40316-2c99-4892-b3c5-9e3e61f05212" containerID="72c5aef6ae252c2f4b34e163aee65c7757addb3a89f37b5d72863ebaa2775b47" exitCode=0
Mar 07 08:12:29 crc kubenswrapper[4761]: I0307 08:12:29.370040 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mb4ct" event={"ID":"30f40316-2c99-4892-b3c5-9e3e61f05212","Type":"ContainerDied","Data":"72c5aef6ae252c2f4b34e163aee65c7757addb3a89f37b5d72863ebaa2775b47"}
Mar 07 08:12:29 crc kubenswrapper[4761]: I0307 08:12:29.372220 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 07 08:12:29 crc kubenswrapper[4761]: I0307 08:12:29.372237 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 07 08:12:29 crc kubenswrapper[4761]: I0307 08:12:29.724211 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="538ded96-3415-417f-8b82-5e29c85bf943" path="/var/lib/kubelet/pods/538ded96-3415-417f-8b82-5e29c85bf943/volumes"
Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.384341 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-548cccfb88-8f8gk"
event={"ID":"befe03c6-a479-47be-a462-d94a93217344","Type":"ContainerStarted","Data":"86c8561318980ddc9b03e998f8c8e8c8ed4238129497411ddba218873461884d"}
Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.822973 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mb4ct"
Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.980930 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-scripts\") pod \"30f40316-2c99-4892-b3c5-9e3e61f05212\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") "
Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.981340 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-fernet-keys\") pod \"30f40316-2c99-4892-b3c5-9e3e61f05212\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") "
Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.981405 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-credential-keys\") pod \"30f40316-2c99-4892-b3c5-9e3e61f05212\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") "
Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.981471 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-combined-ca-bundle\") pod \"30f40316-2c99-4892-b3c5-9e3e61f05212\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") "
Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.981574 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pndwr\" (UniqueName:
\"kubernetes.io/projected/30f40316-2c99-4892-b3c5-9e3e61f05212-kube-api-access-pndwr\") pod \"30f40316-2c99-4892-b3c5-9e3e61f05212\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") "
Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.981606 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-config-data\") pod \"30f40316-2c99-4892-b3c5-9e3e61f05212\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") "
Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.988250 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "30f40316-2c99-4892-b3c5-9e3e61f05212" (UID: "30f40316-2c99-4892-b3c5-9e3e61f05212"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.993331 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-scripts" (OuterVolumeSpecName: "scripts") pod "30f40316-2c99-4892-b3c5-9e3e61f05212" (UID: "30f40316-2c99-4892-b3c5-9e3e61f05212"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.995559 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "30f40316-2c99-4892-b3c5-9e3e61f05212" (UID: "30f40316-2c99-4892-b3c5-9e3e61f05212"). InnerVolumeSpecName "credential-keys".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.995786 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30f40316-2c99-4892-b3c5-9e3e61f05212-kube-api-access-pndwr" (OuterVolumeSpecName: "kube-api-access-pndwr") pod "30f40316-2c99-4892-b3c5-9e3e61f05212" (UID: "30f40316-2c99-4892-b3c5-9e3e61f05212"). InnerVolumeSpecName "kube-api-access-pndwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.029998 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-config-data" (OuterVolumeSpecName: "config-data") pod "30f40316-2c99-4892-b3c5-9e3e61f05212" (UID: "30f40316-2c99-4892-b3c5-9e3e61f05212"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.045709 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30f40316-2c99-4892-b3c5-9e3e61f05212" (UID: "30f40316-2c99-4892-b3c5-9e3e61f05212"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.084153 4761 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-fernet-keys\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.084183 4761 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-credential-keys\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.084194 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.084204 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pndwr\" (UniqueName: \"kubernetes.io/projected/30f40316-2c99-4892-b3c5-9e3e61f05212-kube-api-access-pndwr\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.084212 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-config-data\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.084220 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-scripts\") on node \"crc\" DevicePath \"\""
Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.416964 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-92qzx" event={"ID":"dce2c706-6c24-4be8-b347-90448de8aaf9","Type":"ContainerStarted","Data":"560fe328c871c1fd36e317523f8415d6e1437c8d786e81f4b10c902c8f0a9573"}
Mar 07 08:12:31 crc kubenswrapper[4761]: I0307
08:12:31.421299 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerStarted","Data":"0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294"}
Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.424345 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-548cccfb88-8f8gk" event={"ID":"befe03c6-a479-47be-a462-d94a93217344","Type":"ContainerStarted","Data":"81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b"}
Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.424400 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-548cccfb88-8f8gk" event={"ID":"befe03c6-a479-47be-a462-d94a93217344","Type":"ContainerStarted","Data":"9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b"}
Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.424517 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.426974 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mb4ct" event={"ID":"30f40316-2c99-4892-b3c5-9e3e61f05212","Type":"ContainerDied","Data":"d9efe699d03c708c25907fd28a3ef6cee4fbd98319c0cc281fcb0984b34edbfd"}
Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.427020 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9efe699d03c708c25907fd28a3ef6cee4fbd98319c0cc281fcb0984b34edbfd"
Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.427084 4761 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.453940 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-92qzx" podStartSLOduration=3.149493964 podStartE2EDuration="56.453915238s" podCreationTimestamp="2026-03-07 08:11:35 +0000 UTC" firstStartedPulling="2026-03-07 08:11:37.207500219 +0000 UTC m=+1354.116666694" lastFinishedPulling="2026-03-07 08:12:30.511921493 +0000 UTC m=+1407.421087968" observedRunningTime="2026-03-07 08:12:31.441374261 +0000 UTC m=+1408.350540736" watchObservedRunningTime="2026-03-07 08:12:31.453915238 +0000 UTC m=+1408.363081713" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.460920 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-548cccfb88-8f8gk" podStartSLOduration=3.460903514 podStartE2EDuration="3.460903514s" podCreationTimestamp="2026-03-07 08:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:31.457446127 +0000 UTC m=+1408.366612602" watchObservedRunningTime="2026-03-07 08:12:31.460903514 +0000 UTC m=+1408.370069989" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.591123 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-668988d5d5-hwhxv"] Mar 07 08:12:31 crc kubenswrapper[4761]: E0307 08:12:31.591603 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f40316-2c99-4892-b3c5-9e3e61f05212" containerName="keystone-bootstrap" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.591620 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f40316-2c99-4892-b3c5-9e3e61f05212" containerName="keystone-bootstrap" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.591868 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f40316-2c99-4892-b3c5-9e3e61f05212" 
containerName="keystone-bootstrap" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.592575 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.594434 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.599792 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.600448 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pgh8w" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.600868 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.601960 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.602174 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.611434 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-668988d5d5-hwhxv"] Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.697597 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-fernet-keys\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.698028 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-config-data\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.698233 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-credential-keys\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.698379 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-combined-ca-bundle\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.698545 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr57h\" (UniqueName: \"kubernetes.io/projected/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-kube-api-access-vr57h\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.698667 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-public-tls-certs\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.698810 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-internal-tls-certs\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.698919 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-scripts\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.800801 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-credential-keys\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.800912 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-combined-ca-bundle\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.800999 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr57h\" (UniqueName: \"kubernetes.io/projected/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-kube-api-access-vr57h\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.801051 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-public-tls-certs\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.801097 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-internal-tls-certs\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.801144 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-scripts\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.801199 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-fernet-keys\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.801281 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-config-data\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.806513 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-internal-tls-certs\") pod \"keystone-668988d5d5-hwhxv\" 
(UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.809266 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-credential-keys\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.809621 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-combined-ca-bundle\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.810039 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-fernet-keys\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.810146 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-scripts\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.810289 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-public-tls-certs\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 
08:12:31.821664 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-config-data\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.841667 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr57h\" (UniqueName: \"kubernetes.io/projected/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-kube-api-access-vr57h\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.910901 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:32 crc kubenswrapper[4761]: I0307 08:12:32.440816 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d9psc" event={"ID":"782631b9-e01d-424c-af31-3471bfdf1587","Type":"ContainerStarted","Data":"a4cceda235cdb340157db8083fb5a763bc0408a1d5edeb08189f027c6a110169"} Mar 07 08:12:32 crc kubenswrapper[4761]: I0307 08:12:32.446276 4761 generic.go:334] "Generic (PLEG): container finished" podID="9b3dba79-45f7-4154-9691-fa333ba6ad0d" containerID="c28cc09420ea2ac493abf8f06587bcec5b390f6464161eeca9b61f712c64b3e1" exitCode=0 Mar 07 08:12:32 crc kubenswrapper[4761]: I0307 08:12:32.446666 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wnsq8" event={"ID":"9b3dba79-45f7-4154-9691-fa333ba6ad0d","Type":"ContainerDied","Data":"c28cc09420ea2ac493abf8f06587bcec5b390f6464161eeca9b61f712c64b3e1"} Mar 07 08:12:32 crc kubenswrapper[4761]: I0307 08:12:32.446880 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:32 crc kubenswrapper[4761]: I0307 08:12:32.482872 
4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-668988d5d5-hwhxv"] Mar 07 08:12:32 crc kubenswrapper[4761]: W0307 08:12:32.482872 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode467d7ea_5958_4bcc_84b2_4ade3fdb5cc6.slice/crio-fc7bde588baae8573bfe0b7bcc8daaa4b2b7398f27f9ffca187164f9191e39b0 WatchSource:0}: Error finding container fc7bde588baae8573bfe0b7bcc8daaa4b2b7398f27f9ffca187164f9191e39b0: Status 404 returned error can't find the container with id fc7bde588baae8573bfe0b7bcc8daaa4b2b7398f27f9ffca187164f9191e39b0 Mar 07 08:12:32 crc kubenswrapper[4761]: I0307 08:12:32.485571 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-d9psc" podStartSLOduration=4.841394784 podStartE2EDuration="57.485547435s" podCreationTimestamp="2026-03-07 08:11:35 +0000 UTC" firstStartedPulling="2026-03-07 08:11:37.870148552 +0000 UTC m=+1354.779315027" lastFinishedPulling="2026-03-07 08:12:30.514301203 +0000 UTC m=+1407.423467678" observedRunningTime="2026-03-07 08:12:32.477890702 +0000 UTC m=+1409.387057177" watchObservedRunningTime="2026-03-07 08:12:32.485547435 +0000 UTC m=+1409.394713910" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.094399 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-84bcb6db96-7gd85"] Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.097204 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.106495 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-84bcb6db96-7gd85"] Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.231075 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae33121e-ffd0-48c2-b440-384ae5683dce-logs\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.231128 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6m57\" (UniqueName: \"kubernetes.io/projected/ae33121e-ffd0-48c2-b440-384ae5683dce-kube-api-access-m6m57\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.231158 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-scripts\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.231179 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-public-tls-certs\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.231233 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-internal-tls-certs\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.231262 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-combined-ca-bundle\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.231330 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-config-data\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.333323 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-scripts\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.333372 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-public-tls-certs\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.333436 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-internal-tls-certs\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.333464 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-combined-ca-bundle\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.333533 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-config-data\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.333638 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae33121e-ffd0-48c2-b440-384ae5683dce-logs\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.333657 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6m57\" (UniqueName: \"kubernetes.io/projected/ae33121e-ffd0-48c2-b440-384ae5683dce-kube-api-access-m6m57\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.335302 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae33121e-ffd0-48c2-b440-384ae5683dce-logs\") pod 
\"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.340477 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-scripts\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.340673 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-internal-tls-certs\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.341913 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-public-tls-certs\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.342906 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-combined-ca-bundle\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.352362 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6m57\" (UniqueName: \"kubernetes.io/projected/ae33121e-ffd0-48c2-b440-384ae5683dce-kube-api-access-m6m57\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " 
pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.356100 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-config-data\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.430685 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.464969 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-668988d5d5-hwhxv" event={"ID":"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6","Type":"ContainerStarted","Data":"fe57db351d992dbfa0615969a700a0836c82f3208baddf92104e258641ca6cc5"} Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.465017 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-668988d5d5-hwhxv" event={"ID":"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6","Type":"ContainerStarted","Data":"fc7bde588baae8573bfe0b7bcc8daaa4b2b7398f27f9ffca187164f9191e39b0"} Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.465210 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.494592 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-668988d5d5-hwhxv" podStartSLOduration=2.494569983 podStartE2EDuration="2.494569983s" podCreationTimestamp="2026-03-07 08:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:33.490158791 +0000 UTC m=+1410.399325266" watchObservedRunningTime="2026-03-07 08:12:33.494569983 +0000 UTC m=+1410.403736458" Mar 07 08:12:33 crc kubenswrapper[4761]: 
I0307 08:12:33.629965 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.630056 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.660087 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.660183 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.771808 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.979831 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.281329 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-84bcb6db96-7gd85"] Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.291928 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-wnsq8" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.369658 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-combined-ca-bundle\") pod \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.369835 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-db-sync-config-data\") pod \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.370574 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn7gd\" (UniqueName: \"kubernetes.io/projected/9b3dba79-45f7-4154-9691-fa333ba6ad0d-kube-api-access-wn7gd\") pod \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.381210 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b3dba79-45f7-4154-9691-fa333ba6ad0d-kube-api-access-wn7gd" (OuterVolumeSpecName: "kube-api-access-wn7gd") pod "9b3dba79-45f7-4154-9691-fa333ba6ad0d" (UID: "9b3dba79-45f7-4154-9691-fa333ba6ad0d"). InnerVolumeSpecName "kube-api-access-wn7gd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.381854 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9b3dba79-45f7-4154-9691-fa333ba6ad0d" (UID: "9b3dba79-45f7-4154-9691-fa333ba6ad0d"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.417879 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b3dba79-45f7-4154-9691-fa333ba6ad0d" (UID: "9b3dba79-45f7-4154-9691-fa333ba6ad0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.476177 4761 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.476224 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn7gd\" (UniqueName: \"kubernetes.io/projected/9b3dba79-45f7-4154-9691-fa333ba6ad0d-kube-api-access-wn7gd\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.476238 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.490251 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wnsq8" event={"ID":"9b3dba79-45f7-4154-9691-fa333ba6ad0d","Type":"ContainerDied","Data":"a3118b0da5de11a281c834601ee472fce42e89b13a7f308dbb3bfacc88e63820"} Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.490301 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3118b0da5de11a281c834601ee472fce42e89b13a7f308dbb3bfacc88e63820" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.490373 4761 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wnsq8" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.492006 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84bcb6db96-7gd85" event={"ID":"ae33121e-ffd0-48c2-b440-384ae5683dce","Type":"ContainerStarted","Data":"5f15170ff1c2f6f4331cbd8e4648d3f7797a20b7aa219e98709e05947c3ec3da"} Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.839253 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-59f545954f-l958x"] Mar 07 08:12:34 crc kubenswrapper[4761]: E0307 08:12:34.840181 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b3dba79-45f7-4154-9691-fa333ba6ad0d" containerName="barbican-db-sync" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.840210 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b3dba79-45f7-4154-9691-fa333ba6ad0d" containerName="barbican-db-sync" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.840752 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b3dba79-45f7-4154-9691-fa333ba6ad0d" containerName="barbican-db-sync" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.842234 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.854415 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7c8db699f6-9j9k4"] Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.859023 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.879451 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.887612 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t549\" (UniqueName: \"kubernetes.io/projected/7d4575c8-a02a-4eb3-9a4c-be82914374f7-kube-api-access-9t549\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.887671 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d4575c8-a02a-4eb3-9a4c-be82914374f7-config-data-custom\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.887794 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d4575c8-a02a-4eb3-9a4c-be82914374f7-logs\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.887836 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d4575c8-a02a-4eb3-9a4c-be82914374f7-combined-ca-bundle\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.887962 
4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d4575c8-a02a-4eb3-9a4c-be82914374f7-config-data\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.897666 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-pfhb5" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.897920 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.901572 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.943683 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-59f545954f-l958x"] Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011249 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d4575c8-a02a-4eb3-9a4c-be82914374f7-config-data\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011319 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t549\" (UniqueName: \"kubernetes.io/projected/7d4575c8-a02a-4eb3-9a4c-be82914374f7-kube-api-access-9t549\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011368 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/7d4575c8-a02a-4eb3-9a4c-be82914374f7-config-data-custom\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011415 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d8cp\" (UniqueName: \"kubernetes.io/projected/04f251ce-e592-4a42-a918-314ea2722d03-kube-api-access-6d8cp\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011436 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f251ce-e592-4a42-a918-314ea2722d03-config-data\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011466 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f251ce-e592-4a42-a918-314ea2722d03-combined-ca-bundle\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011494 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f251ce-e592-4a42-a918-314ea2722d03-logs\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc 
kubenswrapper[4761]: I0307 08:12:35.011535 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04f251ce-e592-4a42-a918-314ea2722d03-config-data-custom\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011582 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d4575c8-a02a-4eb3-9a4c-be82914374f7-logs\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011629 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d4575c8-a02a-4eb3-9a4c-be82914374f7-combined-ca-bundle\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.048111 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d4575c8-a02a-4eb3-9a4c-be82914374f7-logs\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.071556 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d4575c8-a02a-4eb3-9a4c-be82914374f7-config-data-custom\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc 
kubenswrapper[4761]: I0307 08:12:35.072709 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7c8db699f6-9j9k4"] Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.104810 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t549\" (UniqueName: \"kubernetes.io/projected/7d4575c8-a02a-4eb3-9a4c-be82914374f7-kube-api-access-9t549\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.179978 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d4575c8-a02a-4eb3-9a4c-be82914374f7-config-data\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.182341 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d8cp\" (UniqueName: \"kubernetes.io/projected/04f251ce-e592-4a42-a918-314ea2722d03-kube-api-access-6d8cp\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.182391 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f251ce-e592-4a42-a918-314ea2722d03-config-data\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.182420 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/04f251ce-e592-4a42-a918-314ea2722d03-combined-ca-bundle\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.182456 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f251ce-e592-4a42-a918-314ea2722d03-logs\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.182511 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04f251ce-e592-4a42-a918-314ea2722d03-config-data-custom\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.190356 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f251ce-e592-4a42-a918-314ea2722d03-config-data\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.197035 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f251ce-e592-4a42-a918-314ea2722d03-logs\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.197938 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d4575c8-a02a-4eb3-9a4c-be82914374f7-combined-ca-bundle\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.213695 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.231944 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-82z7q"] Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.233666 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.233683 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f251ce-e592-4a42-a918-314ea2722d03-combined-ca-bundle\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.238570 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d8cp\" (UniqueName: \"kubernetes.io/projected/04f251ce-e592-4a42-a918-314ea2722d03-kube-api-access-6d8cp\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.242469 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04f251ce-e592-4a42-a918-314ea2722d03-config-data-custom\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " 
pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.242984 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.254288 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-82z7q"] Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.388836 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-78b5ffc596-hnhkw"] Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.390931 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.395049 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.416007 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.416112 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpblx\" (UniqueName: \"kubernetes.io/projected/372f361d-256a-4a5b-a95d-4f3ff68e5827-kube-api-access-vpblx\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.416175 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.416280 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.416331 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-config\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.416382 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-svc\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.466673 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78b5ffc596-hnhkw"] Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.527692 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc 
kubenswrapper[4761]: I0307 08:12:35.527760 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-combined-ca-bundle\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.527822 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20073497-107b-4d6a-9210-121d5fc67d7f-logs\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.527876 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.527910 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.527941 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbff8\" (UniqueName: \"kubernetes.io/projected/20073497-107b-4d6a-9210-121d5fc67d7f-kube-api-access-dbff8\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " 
pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.527967 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-config\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.528005 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-svc\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.528023 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.528074 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data-custom\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.528097 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpblx\" (UniqueName: \"kubernetes.io/projected/372f361d-256a-4a5b-a95d-4f3ff68e5827-kube-api-access-vpblx\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " 
pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.530666 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.531228 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-config\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.531766 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-svc\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.531935 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.532296 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.572994 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpblx\" (UniqueName: \"kubernetes.io/projected/372f361d-256a-4a5b-a95d-4f3ff68e5827-kube-api-access-vpblx\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.607143 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84bcb6db96-7gd85" event={"ID":"ae33121e-ffd0-48c2-b440-384ae5683dce","Type":"ContainerStarted","Data":"623cf21a8ba1f77bfd1d15ddc677d1af04fe25ea2480aa334702d9bbe7c26459"} Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.633184 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data-custom\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.633301 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-combined-ca-bundle\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.633385 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20073497-107b-4d6a-9210-121d5fc67d7f-logs\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.633418 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.633477 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbff8\" (UniqueName: \"kubernetes.io/projected/20073497-107b-4d6a-9210-121d5fc67d7f-kube-api-access-dbff8\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.634698 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20073497-107b-4d6a-9210-121d5fc67d7f-logs\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.640515 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.648521 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-combined-ca-bundle\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.662015 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data-custom\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.667681 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.681965 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbff8\" (UniqueName: \"kubernetes.io/projected/20073497-107b-4d6a-9210-121d5fc67d7f-kube-api-access-dbff8\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.738561 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78b5ffc596-hnhkw"
Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.056047 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-59f545954f-l958x"]
Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.409227 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7c8db699f6-9j9k4"]
Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.499329 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-82z7q"]
Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.617984 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78b5ffc596-hnhkw"]
Mar 07 08:12:36 crc kubenswrapper[4761]: W0307 08:12:36.632105 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20073497_107b_4d6a_9210_121d5fc67d7f.slice/crio-fb94c0db206c71eafed7d48986873362a51c16d0b640332f8e5ab0454d7c9925 WatchSource:0}: Error finding container fb94c0db206c71eafed7d48986873362a51c16d0b640332f8e5ab0454d7c9925: Status 404 returned error can't find the container with id fb94c0db206c71eafed7d48986873362a51c16d0b640332f8e5ab0454d7c9925
Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.664624 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84bcb6db96-7gd85" event={"ID":"ae33121e-ffd0-48c2-b440-384ae5683dce","Type":"ContainerStarted","Data":"b2dc0983f24e3488e088014e469df025ad02e93b99d618a44b109be37fac6455"}
Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.666412 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-84bcb6db96-7gd85"
Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.666462 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-84bcb6db96-7gd85"
Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.681148 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" event={"ID":"04f251ce-e592-4a42-a918-314ea2722d03","Type":"ContainerStarted","Data":"bb8f564884a9a0beb47d315672c02f2c39e4d2975d27fa47ab34a04be6d88695"}
Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.715041 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" event={"ID":"372f361d-256a-4a5b-a95d-4f3ff68e5827","Type":"ContainerStarted","Data":"760377cf8d09eb0d05b2158590bcd80bfc092a167d90ae20615499b39b777451"}
Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.719492 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-84bcb6db96-7gd85" podStartSLOduration=3.719467618 podStartE2EDuration="3.719467618s" podCreationTimestamp="2026-03-07 08:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:36.699690648 +0000 UTC m=+1413.608857143" watchObservedRunningTime="2026-03-07 08:12:36.719467618 +0000 UTC m=+1413.628634093"
Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.736959 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59f545954f-l958x" event={"ID":"7d4575c8-a02a-4eb3-9a4c-be82914374f7","Type":"ContainerStarted","Data":"12718d2c96e83974b3bc36d8092a8ce91e8f0a9c2058d89416a5c745202cbb70"}
Mar 07 08:12:37 crc kubenswrapper[4761]: I0307 08:12:37.821765 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78b5ffc596-hnhkw" event={"ID":"20073497-107b-4d6a-9210-121d5fc67d7f","Type":"ContainerStarted","Data":"e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1"}
Mar 07 08:12:37 crc kubenswrapper[4761]: I0307 08:12:37.822298 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78b5ffc596-hnhkw" event={"ID":"20073497-107b-4d6a-9210-121d5fc67d7f","Type":"ContainerStarted","Data":"3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab"}
Mar 07 08:12:37 crc kubenswrapper[4761]: I0307 08:12:37.822323 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78b5ffc596-hnhkw" event={"ID":"20073497-107b-4d6a-9210-121d5fc67d7f","Type":"ContainerStarted","Data":"fb94c0db206c71eafed7d48986873362a51c16d0b640332f8e5ab0454d7c9925"}
Mar 07 08:12:37 crc kubenswrapper[4761]: I0307 08:12:37.822357 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78b5ffc596-hnhkw"
Mar 07 08:12:37 crc kubenswrapper[4761]: I0307 08:12:37.822381 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78b5ffc596-hnhkw"
Mar 07 08:12:37 crc kubenswrapper[4761]: I0307 08:12:37.823856 4761 generic.go:334] "Generic (PLEG): container finished" podID="372f361d-256a-4a5b-a95d-4f3ff68e5827" containerID="f517bc7fea3e3513f659bab2cf2fa980f3544430eb492d4d903992e483bbae35" exitCode=0
Mar 07 08:12:37 crc kubenswrapper[4761]: I0307 08:12:37.826534 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" event={"ID":"372f361d-256a-4a5b-a95d-4f3ff68e5827","Type":"ContainerDied","Data":"f517bc7fea3e3513f659bab2cf2fa980f3544430eb492d4d903992e483bbae35"}
Mar 07 08:12:37 crc kubenswrapper[4761]: I0307 08:12:37.893120 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-78b5ffc596-hnhkw" podStartSLOduration=2.8930725600000002 podStartE2EDuration="2.89307256s" podCreationTimestamp="2026-03-07 08:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:37.853935722 +0000 UTC m=+1414.763102207" watchObservedRunningTime="2026-03-07 08:12:37.89307256 +0000 UTC m=+1414.802239035"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.554705 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5ccfb69fc8-m454z"]
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.557129 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.562771 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.563150 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.570140 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5ccfb69fc8-m454z"]
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.637296 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43376e1e-1806-4f20-a05f-fe74fee5d843-logs\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.637378 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-config-data\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.637444 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-public-tls-certs\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.637522 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-config-data-custom\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.637589 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tzxk\" (UniqueName: \"kubernetes.io/projected/43376e1e-1806-4f20-a05f-fe74fee5d843-kube-api-access-6tzxk\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.637638 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-combined-ca-bundle\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.637694 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-internal-tls-certs\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.739497 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tzxk\" (UniqueName: \"kubernetes.io/projected/43376e1e-1806-4f20-a05f-fe74fee5d843-kube-api-access-6tzxk\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.739608 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-combined-ca-bundle\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.739675 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-internal-tls-certs\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.739818 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43376e1e-1806-4f20-a05f-fe74fee5d843-logs\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.739869 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-config-data\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.739924 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-public-tls-certs\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.740283 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43376e1e-1806-4f20-a05f-fe74fee5d843-logs\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.741684 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-config-data-custom\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.748683 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-config-data-custom\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.753144 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-public-tls-certs\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.754524 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-config-data\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.760832 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tzxk\" (UniqueName: \"kubernetes.io/projected/43376e1e-1806-4f20-a05f-fe74fee5d843-kube-api-access-6tzxk\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.761582 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-combined-ca-bundle\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.767375 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-internal-tls-certs\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.937579 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5ccfb69fc8-m454z"
Mar 07 08:12:40 crc kubenswrapper[4761]: I0307 08:12:40.818824 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5ccfb69fc8-m454z"]
Mar 07 08:12:40 crc kubenswrapper[4761]: I0307 08:12:40.866601 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-82z7q"
Mar 07 08:12:40 crc kubenswrapper[4761]: I0307 08:12:40.869263 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5ccfb69fc8-m454z" event={"ID":"43376e1e-1806-4f20-a05f-fe74fee5d843","Type":"ContainerStarted","Data":"9fb0ad85064d364d8a7aa9dea33416f14c1ebfea7048d10b88b15fd201327859"}
Mar 07 08:12:40 crc kubenswrapper[4761]: I0307 08:12:40.898416 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" podStartSLOduration=5.898395143 podStartE2EDuration="5.898395143s" podCreationTimestamp="2026-03-07 08:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:40.890242378 +0000 UTC m=+1417.799408853" watchObservedRunningTime="2026-03-07 08:12:40.898395143 +0000 UTC m=+1417.807561628"
Mar 07 08:12:41 crc kubenswrapper[4761]: I0307 08:12:41.885488 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5ccfb69fc8-m454z" event={"ID":"43376e1e-1806-4f20-a05f-fe74fee5d843","Type":"ContainerStarted","Data":"f5bd59d32aee4c90ac66ec5ff3dd26cb7abf9702e752d5e20b318ea2b59d9fc2"}
Mar 07 08:12:41 crc kubenswrapper[4761]: I0307 08:12:41.888554 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59f545954f-l958x" event={"ID":"7d4575c8-a02a-4eb3-9a4c-be82914374f7","Type":"ContainerStarted","Data":"ce2cb83aa6782fc9d2f0c735b1d86bc175e5814d50dbf50aefd8b9b25fd38015"}
Mar 07 08:12:41 crc kubenswrapper[4761]: I0307 08:12:41.890903 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" event={"ID":"04f251ce-e592-4a42-a918-314ea2722d03","Type":"ContainerStarted","Data":"eba7190ec21d086fe498feb8a60364a7f843d4288031c4ba594dde1a6eaf9c4d"}
Mar 07 08:12:41 crc kubenswrapper[4761]: I0307 08:12:41.890950 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" event={"ID":"04f251ce-e592-4a42-a918-314ea2722d03","Type":"ContainerStarted","Data":"133384ef5d32122b1307b4606d2e106d86593c5a32803a687c2b14f832694f9c"}
Mar 07 08:12:41 crc kubenswrapper[4761]: I0307 08:12:41.895185 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" event={"ID":"372f361d-256a-4a5b-a95d-4f3ff68e5827","Type":"ContainerStarted","Data":"65bf03db04217df982e40e805335492498ff93bc011d82a0a0d18fa7cece75fb"}
Mar 07 08:12:41 crc kubenswrapper[4761]: I0307 08:12:41.919774 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" podStartSLOduration=3.9880902689999997 podStartE2EDuration="7.919756113s" podCreationTimestamp="2026-03-07 08:12:34 +0000 UTC" firstStartedPulling="2026-03-07 08:12:36.438158258 +0000 UTC m=+1413.347324733" lastFinishedPulling="2026-03-07 08:12:40.369824102 +0000 UTC m=+1417.278990577" observedRunningTime="2026-03-07 08:12:41.909250848 +0000 UTC m=+1418.818417333" watchObservedRunningTime="2026-03-07 08:12:41.919756113 +0000 UTC m=+1418.828922588"
Mar 07 08:12:43 crc kubenswrapper[4761]: I0307 08:12:43.927455 4761 generic.go:334] "Generic (PLEG): container finished" podID="dce2c706-6c24-4be8-b347-90448de8aaf9" containerID="560fe328c871c1fd36e317523f8415d6e1437c8d786e81f4b10c902c8f0a9573" exitCode=0
Mar 07 08:12:43 crc kubenswrapper[4761]: I0307 08:12:43.927817 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-92qzx" event={"ID":"dce2c706-6c24-4be8-b347-90448de8aaf9","Type":"ContainerDied","Data":"560fe328c871c1fd36e317523f8415d6e1437c8d786e81f4b10c902c8f0a9573"}
Mar 07 08:12:43 crc kubenswrapper[4761]: I0307 08:12:43.934039 4761 generic.go:334] "Generic (PLEG): container finished" podID="782631b9-e01d-424c-af31-3471bfdf1587" containerID="a4cceda235cdb340157db8083fb5a763bc0408a1d5edeb08189f027c6a110169" exitCode=0
Mar 07 08:12:43 crc kubenswrapper[4761]: I0307 08:12:43.934084 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d9psc" event={"ID":"782631b9-e01d-424c-af31-3471bfdf1587","Type":"ContainerDied","Data":"a4cceda235cdb340157db8083fb5a763bc0408a1d5edeb08189f027c6a110169"}
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.071993 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-57b6497888-fkqsr"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.300798 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q5jjc"]
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.303447 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q5jjc"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.311968 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q5jjc"]
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.335593 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tblc8\" (UniqueName: \"kubernetes.io/projected/d2217e77-ce96-4ec3-9759-79f03958dc9c-kube-api-access-tblc8\") pod \"redhat-operators-q5jjc\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " pod="openshift-marketplace/redhat-operators-q5jjc"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.335795 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-utilities\") pod \"redhat-operators-q5jjc\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " pod="openshift-marketplace/redhat-operators-q5jjc"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.336035 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-catalog-content\") pod \"redhat-operators-q5jjc\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " pod="openshift-marketplace/redhat-operators-q5jjc"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.436907 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-utilities\") pod \"redhat-operators-q5jjc\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " pod="openshift-marketplace/redhat-operators-q5jjc"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.436982 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-catalog-content\") pod \"redhat-operators-q5jjc\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " pod="openshift-marketplace/redhat-operators-q5jjc"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.437096 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tblc8\" (UniqueName: \"kubernetes.io/projected/d2217e77-ce96-4ec3-9759-79f03958dc9c-kube-api-access-tblc8\") pod \"redhat-operators-q5jjc\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " pod="openshift-marketplace/redhat-operators-q5jjc"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.437773 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-catalog-content\") pod \"redhat-operators-q5jjc\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " pod="openshift-marketplace/redhat-operators-q5jjc"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.437820 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-utilities\") pod \"redhat-operators-q5jjc\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " pod="openshift-marketplace/redhat-operators-q5jjc"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.446726 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-795c9dd6fc-kqgf4"]
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.446978 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-795c9dd6fc-kqgf4" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-api" containerID="cri-o://7bda07256ee2627429245d18d5649b3657d12f6cebca25a531829eea2aa0e074" gracePeriod=30
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.447287 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-795c9dd6fc-kqgf4" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-httpd" containerID="cri-o://f0f3124d8f6910b941dc6607e892a98ae3067b4ce30a70b32703105114946abc" gracePeriod=30
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.473457 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tblc8\" (UniqueName: \"kubernetes.io/projected/d2217e77-ce96-4ec3-9759-79f03958dc9c-kube-api-access-tblc8\") pod \"redhat-operators-q5jjc\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " pod="openshift-marketplace/redhat-operators-q5jjc"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.526774 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-69d7d999d5-z6jzw"]
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.528679 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.529882 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-795c9dd6fc-kqgf4" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.201:9696/\": read tcp 10.217.0.2:47112->10.217.0.201:9696: read: connection reset by peer"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.570776 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-69d7d999d5-z6jzw"]
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.630543 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q5jjc"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.645162 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-ovndb-tls-certs\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.645259 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-public-tls-certs\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.645292 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-combined-ca-bundle\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.645372 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h25t\" (UniqueName: \"kubernetes.io/projected/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-kube-api-access-5h25t\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.645393 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-internal-tls-certs\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.645432 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-config\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.645458 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-httpd-config\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.747503 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-ovndb-tls-certs\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.747629 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-public-tls-certs\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.747677 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-combined-ca-bundle\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.747807 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h25t\" (UniqueName: \"kubernetes.io/projected/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-kube-api-access-5h25t\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.747846 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-internal-tls-certs\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.747893 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-config\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.747931 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-httpd-config\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.754211 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-combined-ca-bundle\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.754290 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-internal-tls-certs\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.756690 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-ovndb-tls-certs\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.756769 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-httpd-config\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.757232 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-public-tls-certs\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.759450 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-config\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.793832 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h25t\" (UniqueName: \"kubernetes.io/projected/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-kube-api-access-5h25t\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.883797 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-69d7d999d5-z6jzw"
Mar 07 08:12:45 crc kubenswrapper[4761]: I0307 08:12:45.644948 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-82z7q"
Mar 07 08:12:45 crc kubenswrapper[4761]: I0307 08:12:45.747979 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-2dmg9"]
Mar 07 08:12:45 crc kubenswrapper[4761]: I0307 08:12:45.748270 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" podUID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" containerName="dnsmasq-dns" containerID="cri-o://8770483a09bf6a7b3c50c01184e37a888d5d93e3afa587afe6190ed3256c62ff" gracePeriod=10
Mar 07 08:12:45 crc kubenswrapper[4761]: I0307 08:12:45.986287 4761 generic.go:334] "Generic (PLEG): container finished" podID="e27c72db-fb0c-4db5-965c-2f859f151114" containerID="f0f3124d8f6910b941dc6607e892a98ae3067b4ce30a70b32703105114946abc" exitCode=0
Mar 07 08:12:45 crc kubenswrapper[4761]: I0307 08:12:45.986608 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-795c9dd6fc-kqgf4" event={"ID":"e27c72db-fb0c-4db5-965c-2f859f151114","Type":"ContainerDied","Data":"f0f3124d8f6910b941dc6607e892a98ae3067b4ce30a70b32703105114946abc"}
Mar 07 08:12:45 crc kubenswrapper[4761]: I0307 08:12:45.989216 4761 generic.go:334] "Generic (PLEG): container finished" podID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" containerID="8770483a09bf6a7b3c50c01184e37a888d5d93e3afa587afe6190ed3256c62ff" exitCode=0
Mar 07 08:12:45 crc kubenswrapper[4761]: I0307 08:12:45.989240 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" event={"ID":"1feced41-f55d-41bf-a1fb-3c49a768ea5b","Type":"ContainerDied","Data":"8770483a09bf6a7b3c50c01184e37a888d5d93e3afa587afe6190ed3256c62ff"}
Mar 07 08:12:46 crc kubenswrapper[4761]: I0307 08:12:46.973311 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-795c9dd6fc-kqgf4" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.201:9696/\": dial tcp 10.217.0.201:9696: connect: connection refused"
Mar 07 08:12:47 crc kubenswrapper[4761]: I0307 08:12:47.001100 4761 generic.go:334] "Generic (PLEG): container finished" podID="e27c72db-fb0c-4db5-965c-2f859f151114" containerID="7bda07256ee2627429245d18d5649b3657d12f6cebca25a531829eea2aa0e074" exitCode=0
Mar 07 08:12:47 crc kubenswrapper[4761]: I0307 08:12:47.001161 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-795c9dd6fc-kqgf4" event={"ID":"e27c72db-fb0c-4db5-965c-2f859f151114","Type":"ContainerDied","Data":"7bda07256ee2627429245d18d5649b3657d12f6cebca25a531829eea2aa0e074"}
Mar 07 08:12:47 crc kubenswrapper[4761]: I0307 08:12:47.559644 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78b5ffc596-hnhkw"
Mar 07 08:12:47 crc kubenswrapper[4761]: I0307 08:12:47.668040 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78b5ffc596-hnhkw"
Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.040431 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59f545954f-l958x" event={"ID":"7d4575c8-a02a-4eb3-9a4c-be82914374f7","Type":"ContainerStarted","Data":"797c2dc05795c6001535fa27b64c70fc02b56105df5506484602790785ab8d85"}
Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.075056 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-59f545954f-l958x" podStartSLOduration=9.736849776 podStartE2EDuration="14.075039731s" podCreationTimestamp="2026-03-07 08:12:34 +0000 UTC" firstStartedPulling="2026-03-07 08:12:36.03095798 +0000 UTC m=+1412.940124455" lastFinishedPulling="2026-03-07 08:12:40.369147935 +0000 UTC m=+1417.278314410" observedRunningTime="2026-03-07 08:12:48.066547867 +0000 UTC m=+1424.975714342" watchObservedRunningTime="2026-03-07 08:12:48.075039731 +0000 UTC m=+1424.984206206"
Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.788381 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d9psc"
Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.793686 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-92qzx"
Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.863519 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxs4p\" (UniqueName: \"kubernetes.io/projected/782631b9-e01d-424c-af31-3471bfdf1587-kube-api-access-hxs4p\") pod \"782631b9-e01d-424c-af31-3471bfdf1587\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") "
Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.863592 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-combined-ca-bundle\") pod \"782631b9-e01d-424c-af31-3471bfdf1587\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") "
Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.863624 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-combined-ca-bundle\") pod \"dce2c706-6c24-4be8-b347-90448de8aaf9\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") "
Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.863664 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-db-sync-config-data\") pod \"782631b9-e01d-424c-af31-3471bfdf1587\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") "
Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.863695 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-scripts\") pod \"782631b9-e01d-424c-af31-3471bfdf1587\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") "
Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.863787 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-config-data\") pod \"782631b9-e01d-424c-af31-3471bfdf1587\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") "
Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.863860 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-config-data\") pod \"dce2c706-6c24-4be8-b347-90448de8aaf9\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") "
Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.863908 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66pdh\" (UniqueName: \"kubernetes.io/projected/dce2c706-6c24-4be8-b347-90448de8aaf9-kube-api-access-66pdh\") pod \"dce2c706-6c24-4be8-b347-90448de8aaf9\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") "
Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.864041 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/782631b9-e01d-424c-af31-3471bfdf1587-etc-machine-id\") pod 
\"782631b9-e01d-424c-af31-3471bfdf1587\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.864558 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/782631b9-e01d-424c-af31-3471bfdf1587-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "782631b9-e01d-424c-af31-3471bfdf1587" (UID: "782631b9-e01d-424c-af31-3471bfdf1587"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.881526 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-scripts" (OuterVolumeSpecName: "scripts") pod "782631b9-e01d-424c-af31-3471bfdf1587" (UID: "782631b9-e01d-424c-af31-3471bfdf1587"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.881876 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/782631b9-e01d-424c-af31-3471bfdf1587-kube-api-access-hxs4p" (OuterVolumeSpecName: "kube-api-access-hxs4p") pod "782631b9-e01d-424c-af31-3471bfdf1587" (UID: "782631b9-e01d-424c-af31-3471bfdf1587"). InnerVolumeSpecName "kube-api-access-hxs4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.882878 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dce2c706-6c24-4be8-b347-90448de8aaf9-kube-api-access-66pdh" (OuterVolumeSpecName: "kube-api-access-66pdh") pod "dce2c706-6c24-4be8-b347-90448de8aaf9" (UID: "dce2c706-6c24-4be8-b347-90448de8aaf9"). InnerVolumeSpecName "kube-api-access-66pdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.886242 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "782631b9-e01d-424c-af31-3471bfdf1587" (UID: "782631b9-e01d-424c-af31-3471bfdf1587"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.928148 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dce2c706-6c24-4be8-b347-90448de8aaf9" (UID: "dce2c706-6c24-4be8-b347-90448de8aaf9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.931726 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "782631b9-e01d-424c-af31-3471bfdf1587" (UID: "782631b9-e01d-424c-af31-3471bfdf1587"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.974865 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66pdh\" (UniqueName: \"kubernetes.io/projected/dce2c706-6c24-4be8-b347-90448de8aaf9-kube-api-access-66pdh\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.974899 4761 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/782631b9-e01d-424c-af31-3471bfdf1587-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.974911 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxs4p\" (UniqueName: \"kubernetes.io/projected/782631b9-e01d-424c-af31-3471bfdf1587-kube-api-access-hxs4p\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.974923 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.974938 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.974951 4761 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.974963 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:49 crc kubenswrapper[4761]: 
I0307 08:12:49.082085 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-92qzx" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.082103 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-92qzx" event={"ID":"dce2c706-6c24-4be8-b347-90448de8aaf9","Type":"ContainerDied","Data":"852459d3b2b553dabaa3fb65bc625cef07f0159ca47f92b91b195c4c5a7e2463"} Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.082170 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="852459d3b2b553dabaa3fb65bc625cef07f0159ca47f92b91b195c4c5a7e2463" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.084083 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d9psc" event={"ID":"782631b9-e01d-424c-af31-3471bfdf1587","Type":"ContainerDied","Data":"57fe1c0b330204d6c39c8493ef2a297ed02920ab824fcfb73ae311a94daa5c9c"} Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.084140 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d9psc" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.084148 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57fe1c0b330204d6c39c8493ef2a297ed02920ab824fcfb73ae311a94daa5c9c" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.158748 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.176436 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-config-data" (OuterVolumeSpecName: "config-data") pod "782631b9-e01d-424c-af31-3471bfdf1587" (UID: "782631b9-e01d-424c-af31-3471bfdf1587"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.179424 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.280856 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-svc\") pod \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.281303 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-nb\") pod \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.281563 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-sb\") pod \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.281788 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-config\") pod \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.282276 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zs4h\" (UniqueName: \"kubernetes.io/projected/1feced41-f55d-41bf-a1fb-3c49a768ea5b-kube-api-access-5zs4h\") pod 
\"1feced41-f55d-41bf-a1fb-3c49a768ea5b\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.282385 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-swift-storage-0\") pod \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.289983 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1feced41-f55d-41bf-a1fb-3c49a768ea5b-kube-api-access-5zs4h" (OuterVolumeSpecName: "kube-api-access-5zs4h") pod "1feced41-f55d-41bf-a1fb-3c49a768ea5b" (UID: "1feced41-f55d-41bf-a1fb-3c49a768ea5b"). InnerVolumeSpecName "kube-api-access-5zs4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.357933 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-config-data" (OuterVolumeSpecName: "config-data") pod "dce2c706-6c24-4be8-b347-90448de8aaf9" (UID: "dce2c706-6c24-4be8-b347-90448de8aaf9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.385551 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.385588 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zs4h\" (UniqueName: \"kubernetes.io/projected/1feced41-f55d-41bf-a1fb-3c49a768ea5b-kube-api-access-5zs4h\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.582208 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q5jjc"] Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.589492 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1feced41-f55d-41bf-a1fb-3c49a768ea5b" (UID: "1feced41-f55d-41bf-a1fb-3c49a768ea5b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.590381 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1feced41-f55d-41bf-a1fb-3c49a768ea5b" (UID: "1feced41-f55d-41bf-a1fb-3c49a768ea5b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.607445 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.607482 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.612125 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1feced41-f55d-41bf-a1fb-3c49a768ea5b" (UID: "1feced41-f55d-41bf-a1fb-3c49a768ea5b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.612348 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1feced41-f55d-41bf-a1fb-3c49a768ea5b" (UID: "1feced41-f55d-41bf-a1fb-3c49a768ea5b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.661360 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-config" (OuterVolumeSpecName: "config") pod "1feced41-f55d-41bf-a1fb-3c49a768ea5b" (UID: "1feced41-f55d-41bf-a1fb-3c49a768ea5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.663663 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-69d7d999d5-z6jzw"] Mar 07 08:12:49 crc kubenswrapper[4761]: W0307 08:12:49.664577 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad8d6ecb_2a0a_4ba6_b995_e95ea3c2174d.slice/crio-e4d90585921575c3dd7180565c767123f29febb0ffc745d9215c24a4742293d8 WatchSource:0}: Error finding container e4d90585921575c3dd7180565c767123f29febb0ffc745d9215c24a4742293d8: Status 404 returned error can't find the container with id e4d90585921575c3dd7180565c767123f29febb0ffc745d9215c24a4742293d8 Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.710926 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.711133 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.711294 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.850781 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.021082 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-public-tls-certs\") pod \"e27c72db-fb0c-4db5-965c-2f859f151114\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.022118 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-internal-tls-certs\") pod \"e27c72db-fb0c-4db5-965c-2f859f151114\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.022150 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-httpd-config\") pod \"e27c72db-fb0c-4db5-965c-2f859f151114\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.022227 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-config\") pod \"e27c72db-fb0c-4db5-965c-2f859f151114\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.022274 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-ovndb-tls-certs\") pod \"e27c72db-fb0c-4db5-965c-2f859f151114\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.022293 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-combined-ca-bundle\") pod \"e27c72db-fb0c-4db5-965c-2f859f151114\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.022352 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnvnq\" (UniqueName: \"kubernetes.io/projected/e27c72db-fb0c-4db5-965c-2f859f151114-kube-api-access-jnvnq\") pod \"e27c72db-fb0c-4db5-965c-2f859f151114\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.027554 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27c72db-fb0c-4db5-965c-2f859f151114-kube-api-access-jnvnq" (OuterVolumeSpecName: "kube-api-access-jnvnq") pod "e27c72db-fb0c-4db5-965c-2f859f151114" (UID: "e27c72db-fb0c-4db5-965c-2f859f151114"). InnerVolumeSpecName "kube-api-access-jnvnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.070202 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e27c72db-fb0c-4db5-965c-2f859f151114" (UID: "e27c72db-fb0c-4db5-965c-2f859f151114"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.123257 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e27c72db-fb0c-4db5-965c-2f859f151114" (UID: "e27c72db-fb0c-4db5-965c-2f859f151114"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.125634 4761 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.125663 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnvnq\" (UniqueName: \"kubernetes.io/projected/e27c72db-fb0c-4db5-965c-2f859f151114-kube-api-access-jnvnq\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.125691 4761 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.130398 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 08:12:50 crc kubenswrapper[4761]: E0307 08:12:50.130906 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782631b9-e01d-424c-af31-3471bfdf1587" containerName="cinder-db-sync" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.130920 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="782631b9-e01d-424c-af31-3471bfdf1587" containerName="cinder-db-sync" Mar 07 08:12:50 crc kubenswrapper[4761]: E0307 08:12:50.130932 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" containerName="init" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.130938 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" containerName="init" Mar 07 08:12:50 crc kubenswrapper[4761]: E0307 08:12:50.130955 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-api" Mar 07 
08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.130960 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-api" Mar 07 08:12:50 crc kubenswrapper[4761]: E0307 08:12:50.130973 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce2c706-6c24-4be8-b347-90448de8aaf9" containerName="heat-db-sync" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.130979 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce2c706-6c24-4be8-b347-90448de8aaf9" containerName="heat-db-sync" Mar 07 08:12:50 crc kubenswrapper[4761]: E0307 08:12:50.130992 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-httpd" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.130998 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-httpd" Mar 07 08:12:50 crc kubenswrapper[4761]: E0307 08:12:50.131021 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" containerName="dnsmasq-dns" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.131026 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" containerName="dnsmasq-dns" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.131216 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce2c706-6c24-4be8-b347-90448de8aaf9" containerName="heat-db-sync" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.131231 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-httpd" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.131245 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-api" Mar 07 08:12:50 crc kubenswrapper[4761]: 
I0307 08:12:50.131252 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" containerName="dnsmasq-dns" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.131262 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="782631b9-e01d-424c-af31-3471bfdf1587" containerName="cinder-db-sync" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.135703 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.145316 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.145591 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.145950 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pnxzw" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.146542 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.159298 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" event={"ID":"1feced41-f55d-41bf-a1fb-3c49a768ea5b","Type":"ContainerDied","Data":"9e9adff463c65d7c6bb0ccc48d5be6576530813a03c5a123454224aeb14c06bf"} Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.159337 4761 scope.go:117] "RemoveContainer" containerID="8770483a09bf6a7b3c50c01184e37a888d5d93e3afa587afe6190ed3256c62ff" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.159494 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.175198 4761 generic.go:334] "Generic (PLEG): container finished" podID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerID="defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593" exitCode=0 Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.175261 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5jjc" event={"ID":"d2217e77-ce96-4ec3-9759-79f03958dc9c","Type":"ContainerDied","Data":"defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593"} Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.175285 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5jjc" event={"ID":"d2217e77-ce96-4ec3-9759-79f03958dc9c","Type":"ContainerStarted","Data":"bf4ea89029ab40970ab415d2d085585802656f14ef4bd9a850650491e936c122"} Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.182273 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5ccfb69fc8-m454z" event={"ID":"43376e1e-1806-4f20-a05f-fe74fee5d843","Type":"ContainerStarted","Data":"b2d85698b5d87bda279f7a37173da6aeef1eb758d5df43a89144210a8fce2b9c"} Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.185307 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.186022 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.186902 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.202234 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e27c72db-fb0c-4db5-965c-2f859f151114" (UID: "e27c72db-fb0c-4db5-965c-2f859f151114"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.225025 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cxtbf"] Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.227025 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.227310 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.227391 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-scripts\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.227418 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.227472 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9b8l\" (UniqueName: 
\"kubernetes.io/projected/347f09d3-6f9f-4eb1-a655-02e6af151d29-kube-api-access-z9b8l\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.227498 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/347f09d3-6f9f-4eb1-a655-02e6af151d29-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.243887 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.244080 4761 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.245437 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerStarted","Data":"599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261"} Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.247254 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.254173 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-config" (OuterVolumeSpecName: "config") pod "e27c72db-fb0c-4db5-965c-2f859f151114" 
(UID: "e27c72db-fb0c-4db5-965c-2f859f151114"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.259709 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="proxy-httpd" containerID="cri-o://599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261" gracePeriod=30 Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.259915 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="ceilometer-central-agent" containerID="cri-o://43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f" gracePeriod=30 Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.259997 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="sg-core" containerID="cri-o://0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294" gracePeriod=30 Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.260034 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="ceilometer-notification-agent" containerID="cri-o://d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b" gracePeriod=30 Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.298950 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.299571 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cxtbf"] Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.299605 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-795c9dd6fc-kqgf4" event={"ID":"e27c72db-fb0c-4db5-965c-2f859f151114","Type":"ContainerDied","Data":"b49dc3f330e56a74796d2982a561612c7903bc1e0336d6b59fb45d5b704fb1e9"} Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.305013 4761 scope.go:117] "RemoveContainer" containerID="0d093c0692b1a14616aa39efb24254b44c88f721100e0fd4189d8017719b5052" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.312041 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69d7d999d5-z6jzw" event={"ID":"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d","Type":"ContainerStarted","Data":"358a6c602188062bab8b056c47a0b4874c321cb737b926d76db0a1ab9f293d3a"} Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.312092 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69d7d999d5-z6jzw" event={"ID":"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d","Type":"ContainerStarted","Data":"e4d90585921575c3dd7180565c767123f29febb0ffc745d9215c24a4742293d8"} Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.324775 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e27c72db-fb0c-4db5-965c-2f859f151114" (UID: "e27c72db-fb0c-4db5-965c-2f859f151114"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346442 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346528 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346573 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346633 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346656 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: 
\"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346706 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pl2s\" (UniqueName: \"kubernetes.io/projected/47de323f-ec4f-408e-ab84-7795676044fe-kube-api-access-4pl2s\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346755 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346790 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-config\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346838 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-scripts\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346859 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 
08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346924 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9b8l\" (UniqueName: \"kubernetes.io/projected/347f09d3-6f9f-4eb1-a655-02e6af151d29-kube-api-access-z9b8l\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346947 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/347f09d3-6f9f-4eb1-a655-02e6af151d29-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.347001 4761 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.347015 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.347058 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/347f09d3-6f9f-4eb1-a655-02e6af151d29-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.355272 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-scripts\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc 
kubenswrapper[4761]: I0307 08:12:50.355692 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.358136 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e27c72db-fb0c-4db5-965c-2f859f151114" (UID: "e27c72db-fb0c-4db5-965c-2f859f151114"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.375684 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.376143 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.382778 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-2dmg9"] Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.388529 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9b8l\" (UniqueName: \"kubernetes.io/projected/347f09d3-6f9f-4eb1-a655-02e6af151d29-kube-api-access-z9b8l\") pod \"cinder-scheduler-0\" (UID: 
\"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.391786 4761 scope.go:117] "RemoveContainer" containerID="f0f3124d8f6910b941dc6607e892a98ae3067b4ce30a70b32703105114946abc" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.411771 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-2dmg9"] Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.438951 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5ccfb69fc8-m454z" podStartSLOduration=12.438930915 podStartE2EDuration="12.438930915s" podCreationTimestamp="2026-03-07 08:12:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:50.26638015 +0000 UTC m=+1427.175546625" watchObservedRunningTime="2026-03-07 08:12:50.438930915 +0000 UTC m=+1427.348097390" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.452040 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pl2s\" (UniqueName: \"kubernetes.io/projected/47de323f-ec4f-408e-ab84-7795676044fe-kube-api-access-4pl2s\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.452119 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-config\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.452238 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.452286 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.452327 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.452350 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.452428 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.453361 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.454089 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-config\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.454666 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.455204 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.455874 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.462882 4761 scope.go:117] "RemoveContainer" containerID="7bda07256ee2627429245d18d5649b3657d12f6cebca25a531829eea2aa0e074" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.479311 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.482406 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pl2s\" (UniqueName: \"kubernetes.io/projected/47de323f-ec4f-408e-ab84-7795676044fe-kube-api-access-4pl2s\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.535327 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.809290283 podStartE2EDuration="1m14.535308108s" podCreationTimestamp="2026-03-07 08:11:36 +0000 UTC" firstStartedPulling="2026-03-07 08:11:37.974858382 +0000 UTC m=+1354.884024857" lastFinishedPulling="2026-03-07 08:12:48.700876207 +0000 UTC m=+1425.610042682" observedRunningTime="2026-03-07 08:12:50.321305116 +0000 UTC m=+1427.230471601" watchObservedRunningTime="2026-03-07 08:12:50.535308108 +0000 UTC m=+1427.444474583" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.570772 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.572574 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.583540 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.587496 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.588599 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.671468 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/785bd50e-a249-4021-83b3-ff8e33c343db-logs\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.671556 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/785bd50e-a249-4021-83b3-ff8e33c343db-etc-machine-id\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.671582 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-scripts\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.671660 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.671773 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tljw8\" (UniqueName: \"kubernetes.io/projected/785bd50e-a249-4021-83b3-ff8e33c343db-kube-api-access-tljw8\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.671927 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data-custom\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.671971 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.686771 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-795c9dd6fc-kqgf4"] Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.705288 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-795c9dd6fc-kqgf4"] Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.773828 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data-custom\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.773892 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.774030 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/785bd50e-a249-4021-83b3-ff8e33c343db-logs\") pod \"cinder-api-0\" 
(UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.774067 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/785bd50e-a249-4021-83b3-ff8e33c343db-etc-machine-id\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.774089 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-scripts\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.774137 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.774192 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tljw8\" (UniqueName: \"kubernetes.io/projected/785bd50e-a249-4021-83b3-ff8e33c343db-kube-api-access-tljw8\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.774870 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/785bd50e-a249-4021-83b3-ff8e33c343db-etc-machine-id\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.775796 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/785bd50e-a249-4021-83b3-ff8e33c343db-logs\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.785409 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data-custom\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.785429 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.785579 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.789136 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-scripts\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.798378 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tljw8\" (UniqueName: \"kubernetes.io/projected/785bd50e-a249-4021-83b3-ff8e33c343db-kube-api-access-tljw8\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.910920 4761 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.343480 4761 generic.go:334] "Generic (PLEG): container finished" podID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerID="599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261" exitCode=0 Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.344129 4761 generic.go:334] "Generic (PLEG): container finished" podID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerID="0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294" exitCode=2 Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.344143 4761 generic.go:334] "Generic (PLEG): container finished" podID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerID="43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f" exitCode=0 Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.344026 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerDied","Data":"599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261"} Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.344240 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerDied","Data":"0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294"} Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.344255 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerDied","Data":"43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f"} Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.371293 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69d7d999d5-z6jzw" 
event={"ID":"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d","Type":"ContainerStarted","Data":"a90f9a5f994ead2335538759585d4a1cf87ad53f41ea24261cee7cfd01cda76b"} Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.373008 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.405756 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.429796 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-69d7d999d5-z6jzw" podStartSLOduration=7.429774514 podStartE2EDuration="7.429774514s" podCreationTimestamp="2026-03-07 08:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:51.390658157 +0000 UTC m=+1428.299824632" watchObservedRunningTime="2026-03-07 08:12:51.429774514 +0000 UTC m=+1428.338940989" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.566557 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z8dct"] Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.568867 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.594158 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cxtbf"] Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.620836 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8dct"] Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.721406 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-catalog-content\") pod \"certified-operators-z8dct\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.722164 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-utilities\") pod \"certified-operators-z8dct\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.722262 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc6st\" (UniqueName: \"kubernetes.io/projected/3ad49ed9-8c84-4de1-830c-679262fc906d-kube-api-access-kc6st\") pod \"certified-operators-z8dct\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.738798 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" path="/var/lib/kubelet/pods/1feced41-f55d-41bf-a1fb-3c49a768ea5b/volumes" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.742970 4761 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" path="/var/lib/kubelet/pods/e27c72db-fb0c-4db5-965c-2f859f151114/volumes" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.827830 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-catalog-content\") pod \"certified-operators-z8dct\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.828081 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-utilities\") pod \"certified-operators-z8dct\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.828291 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc6st\" (UniqueName: \"kubernetes.io/projected/3ad49ed9-8c84-4de1-830c-679262fc906d-kube-api-access-kc6st\") pod \"certified-operators-z8dct\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.829400 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-catalog-content\") pod \"certified-operators-z8dct\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.829782 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-utilities\") pod \"certified-operators-z8dct\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.851625 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.857282 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc6st\" (UniqueName: \"kubernetes.io/projected/3ad49ed9-8c84-4de1-830c-679262fc906d-kube-api-access-kc6st\") pod \"certified-operators-z8dct\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.050486 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.140121 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-px52h"] Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.144486 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.157554 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-px52h"] Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.253010 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-utilities\") pod \"community-operators-px52h\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.253206 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-catalog-content\") pod \"community-operators-px52h\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.253226 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n42w2\" (UniqueName: \"kubernetes.io/projected/321917f1-f061-4e00-a598-2766772d2290-kube-api-access-n42w2\") pod \"community-operators-px52h\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.356869 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-utilities\") pod \"community-operators-px52h\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.357017 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-catalog-content\") pod \"community-operators-px52h\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.357045 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n42w2\" (UniqueName: \"kubernetes.io/projected/321917f1-f061-4e00-a598-2766772d2290-kube-api-access-n42w2\") pod \"community-operators-px52h\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.371040 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-utilities\") pod \"community-operators-px52h\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.374154 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-catalog-content\") pod \"community-operators-px52h\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.387380 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n42w2\" (UniqueName: \"kubernetes.io/projected/321917f1-f061-4e00-a598-2766772d2290-kube-api-access-n42w2\") pod \"community-operators-px52h\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.430526 4761 generic.go:334] "Generic (PLEG): container finished" 
podID="47de323f-ec4f-408e-ab84-7795676044fe" containerID="db76a4b10bb0a626ef23cd3081e3ec0c08ddcae40fa11f11d1e45f6d1d2e63e8" exitCode=0 Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.430598 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" event={"ID":"47de323f-ec4f-408e-ab84-7795676044fe","Type":"ContainerDied","Data":"db76a4b10bb0a626ef23cd3081e3ec0c08ddcae40fa11f11d1e45f6d1d2e63e8"} Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.430626 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" event={"ID":"47de323f-ec4f-408e-ab84-7795676044fe","Type":"ContainerStarted","Data":"69a69b2cb8492a4adf3759da20e907916ffd475dadc126a6233b6ca253538ef7"} Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.470073 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"347f09d3-6f9f-4eb1-a655-02e6af151d29","Type":"ContainerStarted","Data":"9ebd8fc7adeeaf09d3d06b2d65d582f2062424f6e484b1bb1b5c97a9e8444be2"} Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.511022 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"785bd50e-a249-4021-83b3-ff8e33c343db","Type":"ContainerStarted","Data":"6b0131ba18120a6f4dbc7831768e1497b640227ba3fde87bd285ea63104f7db7"} Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.557282 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.557942 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5jjc" event={"ID":"d2217e77-ce96-4ec3-9759-79f03958dc9c","Type":"ContainerStarted","Data":"28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd"} Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.586195 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.614639 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 07 08:12:53 crc kubenswrapper[4761]: I0307 08:12:53.101031 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8dct"] Mar 07 08:12:53 crc kubenswrapper[4761]: I0307 08:12:53.591521 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8dct" event={"ID":"3ad49ed9-8c84-4de1-830c-679262fc906d","Type":"ContainerStarted","Data":"090a72409729fe7daeed2197536fbaddaa5293f4efc5c41aa0af78a61f93da8c"} Mar 07 08:12:53 crc kubenswrapper[4761]: I0307 08:12:53.603948 4761 generic.go:334] "Generic (PLEG): container finished" podID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerID="28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd" exitCode=0 Mar 07 08:12:53 crc kubenswrapper[4761]: I0307 08:12:53.604567 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5jjc" event={"ID":"d2217e77-ce96-4ec3-9759-79f03958dc9c","Type":"ContainerDied","Data":"28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd"} Mar 07 08:12:53 crc kubenswrapper[4761]: I0307 08:12:53.960816 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-px52h"] Mar 07 08:12:53 crc kubenswrapper[4761]: I0307 08:12:53.973530 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" podUID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.198:5353: i/o timeout" Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.507704 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:54 crc 
kubenswrapper[4761]: I0307 08:12:54.636647 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"785bd50e-a249-4021-83b3-ff8e33c343db","Type":"ContainerStarted","Data":"915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67"} Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.652098 4761 generic.go:334] "Generic (PLEG): container finished" podID="321917f1-f061-4e00-a598-2766772d2290" containerID="5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069" exitCode=0 Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.652253 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px52h" event={"ID":"321917f1-f061-4e00-a598-2766772d2290","Type":"ContainerDied","Data":"5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069"} Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.652302 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px52h" event={"ID":"321917f1-f061-4e00-a598-2766772d2290","Type":"ContainerStarted","Data":"2cbfd1b3af208babb0d08bf03360a9cd1efcb6c980322092c1b709cbeae0d45d"} Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.677063 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5jjc" event={"ID":"d2217e77-ce96-4ec3-9759-79f03958dc9c","Type":"ContainerStarted","Data":"3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964"} Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.686946 4761 generic.go:334] "Generic (PLEG): container finished" podID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerID="82295a8be5f6343bca3c9c0785b56f687bd5b59561b60a8b69c2f6c1d2003d94" exitCode=0 Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.687999 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8dct" 
event={"ID":"3ad49ed9-8c84-4de1-830c-679262fc906d","Type":"ContainerDied","Data":"82295a8be5f6343bca3c9c0785b56f687bd5b59561b60a8b69c2f6c1d2003d94"} Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.694504 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" event={"ID":"47de323f-ec4f-408e-ab84-7795676044fe","Type":"ContainerStarted","Data":"2f178f2514e878b04f9c28ff9d6c8b7cb650cbc72f8af917f0ccc2484220920b"} Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.695362 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.710550 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q5jjc" podStartSLOduration=6.742093507 podStartE2EDuration="10.71053389s" podCreationTimestamp="2026-03-07 08:12:44 +0000 UTC" firstStartedPulling="2026-03-07 08:12:50.206793686 +0000 UTC m=+1427.115960161" lastFinishedPulling="2026-03-07 08:12:54.175234069 +0000 UTC m=+1431.084400544" observedRunningTime="2026-03-07 08:12:54.702071236 +0000 UTC m=+1431.611237711" watchObservedRunningTime="2026-03-07 08:12:54.71053389 +0000 UTC m=+1431.619700365" Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.764916 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" podStartSLOduration=4.764897682 podStartE2EDuration="4.764897682s" podCreationTimestamp="2026-03-07 08:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:54.754831388 +0000 UTC m=+1431.663997863" watchObservedRunningTime="2026-03-07 08:12:54.764897682 +0000 UTC m=+1431.674064147" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.461146 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.586642 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-sg-core-conf-yaml\") pod \"ff736eba-5e3e-4608-8f3f-13783efb0735\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.587019 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-run-httpd\") pod \"ff736eba-5e3e-4608-8f3f-13783efb0735\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.587072 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-log-httpd\") pod \"ff736eba-5e3e-4608-8f3f-13783efb0735\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.587154 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lctwn\" (UniqueName: \"kubernetes.io/projected/ff736eba-5e3e-4608-8f3f-13783efb0735-kube-api-access-lctwn\") pod \"ff736eba-5e3e-4608-8f3f-13783efb0735\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.587195 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-config-data\") pod \"ff736eba-5e3e-4608-8f3f-13783efb0735\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.587301 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-scripts\") pod \"ff736eba-5e3e-4608-8f3f-13783efb0735\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.587351 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-combined-ca-bundle\") pod \"ff736eba-5e3e-4608-8f3f-13783efb0735\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.588354 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ff736eba-5e3e-4608-8f3f-13783efb0735" (UID: "ff736eba-5e3e-4608-8f3f-13783efb0735"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.588953 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ff736eba-5e3e-4608-8f3f-13783efb0735" (UID: "ff736eba-5e3e-4608-8f3f-13783efb0735"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.620584 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-scripts" (OuterVolumeSpecName: "scripts") pod "ff736eba-5e3e-4608-8f3f-13783efb0735" (UID: "ff736eba-5e3e-4608-8f3f-13783efb0735"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.629884 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff736eba-5e3e-4608-8f3f-13783efb0735-kube-api-access-lctwn" (OuterVolumeSpecName: "kube-api-access-lctwn") pod "ff736eba-5e3e-4608-8f3f-13783efb0735" (UID: "ff736eba-5e3e-4608-8f3f-13783efb0735"). InnerVolumeSpecName "kube-api-access-lctwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.673106 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ff736eba-5e3e-4608-8f3f-13783efb0735" (UID: "ff736eba-5e3e-4608-8f3f-13783efb0735"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.691320 4761 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.691354 4761 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.691367 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lctwn\" (UniqueName: \"kubernetes.io/projected/ff736eba-5e3e-4608-8f3f-13783efb0735-kube-api-access-lctwn\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.691383 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-scripts\") on node \"crc\" 
DevicePath \"\"" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.691394 4761 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.730614 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="785bd50e-a249-4021-83b3-ff8e33c343db" containerName="cinder-api-log" containerID="cri-o://915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67" gracePeriod=30 Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.730698 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="785bd50e-a249-4021-83b3-ff8e33c343db" containerName="cinder-api" containerID="cri-o://dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792" gracePeriod=30 Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.739151 4761 generic.go:334] "Generic (PLEG): container finished" podID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerID="d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b" exitCode=0 Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.744706 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.774481 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.941215584 podStartE2EDuration="5.774460834s" podCreationTimestamp="2026-03-07 08:12:50 +0000 UTC" firstStartedPulling="2026-03-07 08:12:51.406691832 +0000 UTC m=+1428.315858297" lastFinishedPulling="2026-03-07 08:12:53.239937072 +0000 UTC m=+1430.149103547" observedRunningTime="2026-03-07 08:12:55.748101318 +0000 UTC m=+1432.657267793" watchObservedRunningTime="2026-03-07 08:12:55.774460834 +0000 UTC m=+1432.683627329" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.775092 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff736eba-5e3e-4608-8f3f-13783efb0735" (UID: "ff736eba-5e3e-4608-8f3f-13783efb0735"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.780924 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.780908096 podStartE2EDuration="5.780908096s" podCreationTimestamp="2026-03-07 08:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:55.777819918 +0000 UTC m=+1432.686986393" watchObservedRunningTime="2026-03-07 08:12:55.780908096 +0000 UTC m=+1432.690074571" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.793430 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.826990 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-config-data" (OuterVolumeSpecName: "config-data") pod "ff736eba-5e3e-4608-8f3f-13783efb0735" (UID: "ff736eba-5e3e-4608-8f3f-13783efb0735"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.897964 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.942254 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"347f09d3-6f9f-4eb1-a655-02e6af151d29","Type":"ContainerStarted","Data":"6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3"} Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.942324 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"347f09d3-6f9f-4eb1-a655-02e6af151d29","Type":"ContainerStarted","Data":"42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e"} Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.942351 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.942385 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"785bd50e-a249-4021-83b3-ff8e33c343db","Type":"ContainerStarted","Data":"dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792"} Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.942411 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerDied","Data":"d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b"} Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.942435 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerDied","Data":"da1033284673b02ff41b3d930dbfee0b2953cef69b3b38ce497df0dcfce3925a"} Mar 07 08:12:55 crc 
kubenswrapper[4761]: I0307 08:12:55.942486 4761 scope.go:117] "RemoveContainer" containerID="599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.007084 4761 scope.go:117] "RemoveContainer" containerID="0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.030443 4761 scope.go:117] "RemoveContainer" containerID="d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.095905 4761 scope.go:117] "RemoveContainer" containerID="43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.096823 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.139191 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.194074 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:12:56 crc kubenswrapper[4761]: E0307 08:12:56.194504 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="ceilometer-notification-agent" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.194522 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="ceilometer-notification-agent" Mar 07 08:12:56 crc kubenswrapper[4761]: E0307 08:12:56.194537 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="ceilometer-central-agent" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.194544 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="ceilometer-central-agent" Mar 07 08:12:56 crc 
kubenswrapper[4761]: E0307 08:12:56.194560 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="proxy-httpd" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.194565 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="proxy-httpd" Mar 07 08:12:56 crc kubenswrapper[4761]: E0307 08:12:56.194579 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="sg-core" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.194584 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="sg-core" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.194809 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="sg-core" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.194821 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="proxy-httpd" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.194831 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="ceilometer-central-agent" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.194847 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="ceilometer-notification-agent" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.196766 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.204145 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.204578 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.204785 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.253424 4761 scope.go:117] "RemoveContainer" containerID="599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261" Mar 07 08:12:56 crc kubenswrapper[4761]: E0307 08:12:56.254198 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261\": container with ID starting with 599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261 not found: ID does not exist" containerID="599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.254258 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261"} err="failed to get container status \"599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261\": rpc error: code = NotFound desc = could not find container \"599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261\": container with ID starting with 599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261 not found: ID does not exist" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.254310 4761 scope.go:117] "RemoveContainer" containerID="0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294" Mar 07 08:12:56 crc kubenswrapper[4761]: E0307 
08:12:56.254833 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294\": container with ID starting with 0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294 not found: ID does not exist" containerID="0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.254943 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294"} err="failed to get container status \"0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294\": rpc error: code = NotFound desc = could not find container \"0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294\": container with ID starting with 0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294 not found: ID does not exist" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.255022 4761 scope.go:117] "RemoveContainer" containerID="d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b" Mar 07 08:12:56 crc kubenswrapper[4761]: E0307 08:12:56.257012 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b\": container with ID starting with d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b not found: ID does not exist" containerID="d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.257054 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b"} err="failed to get container status \"d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b\": rpc 
error: code = NotFound desc = could not find container \"d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b\": container with ID starting with d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b not found: ID does not exist" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.257080 4761 scope.go:117] "RemoveContainer" containerID="43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f" Mar 07 08:12:56 crc kubenswrapper[4761]: E0307 08:12:56.257355 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f\": container with ID starting with 43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f not found: ID does not exist" containerID="43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.257376 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f"} err="failed to get container status \"43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f\": rpc error: code = NotFound desc = could not find container \"43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f\": container with ID starting with 43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f not found: ID does not exist" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.308478 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.308553 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-scripts\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.308787 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd7rk\" (UniqueName: \"kubernetes.io/projected/d7481eb8-b067-41f0-9347-7665f72b5d6a-kube-api-access-jd7rk\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.308814 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-run-httpd\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.309009 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-log-httpd\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.309129 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-config-data\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.309156 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.358485 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.410815 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data-custom\") pod \"785bd50e-a249-4021-83b3-ff8e33c343db\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.410889 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-scripts\") pod \"785bd50e-a249-4021-83b3-ff8e33c343db\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.410937 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data\") pod \"785bd50e-a249-4021-83b3-ff8e33c343db\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.411037 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/785bd50e-a249-4021-83b3-ff8e33c343db-logs\") pod \"785bd50e-a249-4021-83b3-ff8e33c343db\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.411078 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tljw8\" (UniqueName: 
\"kubernetes.io/projected/785bd50e-a249-4021-83b3-ff8e33c343db-kube-api-access-tljw8\") pod \"785bd50e-a249-4021-83b3-ff8e33c343db\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.411158 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-combined-ca-bundle\") pod \"785bd50e-a249-4021-83b3-ff8e33c343db\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.411236 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/785bd50e-a249-4021-83b3-ff8e33c343db-etc-machine-id\") pod \"785bd50e-a249-4021-83b3-ff8e33c343db\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.411649 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.411736 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-scripts\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.411935 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-run-httpd\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 
08:12:56.411963 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd7rk\" (UniqueName: \"kubernetes.io/projected/d7481eb8-b067-41f0-9347-7665f72b5d6a-kube-api-access-jd7rk\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.412010 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-log-httpd\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.412054 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-config-data\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.412080 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.413872 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-run-httpd\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.413901 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-log-httpd\") pod \"ceilometer-0\" (UID: 
\"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.413955 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/785bd50e-a249-4021-83b3-ff8e33c343db-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "785bd50e-a249-4021-83b3-ff8e33c343db" (UID: "785bd50e-a249-4021-83b3-ff8e33c343db"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.414024 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/785bd50e-a249-4021-83b3-ff8e33c343db-logs" (OuterVolumeSpecName: "logs") pod "785bd50e-a249-4021-83b3-ff8e33c343db" (UID: "785bd50e-a249-4021-83b3-ff8e33c343db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.421993 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/785bd50e-a249-4021-83b3-ff8e33c343db-kube-api-access-tljw8" (OuterVolumeSpecName: "kube-api-access-tljw8") pod "785bd50e-a249-4021-83b3-ff8e33c343db" (UID: "785bd50e-a249-4021-83b3-ff8e33c343db"). InnerVolumeSpecName "kube-api-access-tljw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.422497 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:56.423908 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "785bd50e-a249-4021-83b3-ff8e33c343db" (UID: "785bd50e-a249-4021-83b3-ff8e33c343db"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.081300 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-scripts" (OuterVolumeSpecName: "scripts") pod "785bd50e-a249-4021-83b3-ff8e33c343db" (UID: "785bd50e-a249-4021-83b3-ff8e33c343db"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.083254 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-config-data\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.093424 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd7rk\" (UniqueName: \"kubernetes.io/projected/d7481eb8-b067-41f0-9347-7665f72b5d6a-kube-api-access-jd7rk\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.094221 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-scripts\") pod \"785bd50e-a249-4021-83b3-ff8e33c343db\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " Mar 07 08:12:57 crc kubenswrapper[4761]: W0307 08:12:57.095072 4761 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/785bd50e-a249-4021-83b3-ff8e33c343db/volumes/kubernetes.io~secret/scripts Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.095098 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-scripts" (OuterVolumeSpecName: "scripts") pod "785bd50e-a249-4021-83b3-ff8e33c343db" (UID: "785bd50e-a249-4021-83b3-ff8e33c343db"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.097760 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.102561 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-scripts\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.110895 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/785bd50e-a249-4021-83b3-ff8e33c343db-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.111233 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tljw8\" (UniqueName: \"kubernetes.io/projected/785bd50e-a249-4021-83b3-ff8e33c343db-kube-api-access-tljw8\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.111383 4761 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/785bd50e-a249-4021-83b3-ff8e33c343db-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.111405 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.111417 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.113116 4761 generic.go:334] "Generic (PLEG): container finished" podID="785bd50e-a249-4021-83b3-ff8e33c343db" containerID="dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792" exitCode=0 Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.113163 4761 generic.go:334] "Generic (PLEG): container finished" podID="785bd50e-a249-4021-83b3-ff8e33c343db" containerID="915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67" exitCode=143 Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.113170 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.113247 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"785bd50e-a249-4021-83b3-ff8e33c343db","Type":"ContainerDied","Data":"dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792"} Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.113287 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"785bd50e-a249-4021-83b3-ff8e33c343db","Type":"ContainerDied","Data":"915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67"} Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.113302 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"785bd50e-a249-4021-83b3-ff8e33c343db","Type":"ContainerDied","Data":"6b0131ba18120a6f4dbc7831768e1497b640227ba3fde87bd285ea63104f7db7"} Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.113363 4761 scope.go:117] "RemoveContainer" containerID="dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.142533 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.188243 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "785bd50e-a249-4021-83b3-ff8e33c343db" (UID: "785bd50e-a249-4021-83b3-ff8e33c343db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.207882 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data" (OuterVolumeSpecName: "config-data") pod "785bd50e-a249-4021-83b3-ff8e33c343db" (UID: "785bd50e-a249-4021-83b3-ff8e33c343db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.215059 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.215365 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.320731 4761 scope.go:117] "RemoveContainer" containerID="915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.399979 4761 scope.go:117] "RemoveContainer" containerID="dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792" Mar 07 08:12:57 crc kubenswrapper[4761]: E0307 08:12:57.402192 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792\": container with ID starting with dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792 not found: ID does not exist" containerID="dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.402223 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792"} err="failed to get container status \"dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792\": rpc error: code = NotFound desc = could not find container \"dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792\": container with ID starting with dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792 not found: ID does not exist" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.402244 4761 scope.go:117] "RemoveContainer" containerID="915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67" Mar 07 08:12:57 crc kubenswrapper[4761]: E0307 08:12:57.402550 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67\": container with ID starting with 915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67 not found: ID does not exist" containerID="915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.402576 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67"} err="failed to get container status \"915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67\": rpc error: code = NotFound desc = could not find container \"915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67\": container 
with ID starting with 915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67 not found: ID does not exist" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.402596 4761 scope.go:117] "RemoveContainer" containerID="dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.402794 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792"} err="failed to get container status \"dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792\": rpc error: code = NotFound desc = could not find container \"dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792\": container with ID starting with dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792 not found: ID does not exist" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.402809 4761 scope.go:117] "RemoveContainer" containerID="915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.402990 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67"} err="failed to get container status \"915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67\": rpc error: code = NotFound desc = could not find container \"915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67\": container with ID starting with 915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67 not found: ID does not exist" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.513241 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.523829 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 07 08:12:57 crc kubenswrapper[4761]: 
I0307 08:12:57.552561 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.640788 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 07 08:12:57 crc kubenswrapper[4761]: E0307 08:12:57.641319 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785bd50e-a249-4021-83b3-ff8e33c343db" containerName="cinder-api" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.641335 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="785bd50e-a249-4021-83b3-ff8e33c343db" containerName="cinder-api" Mar 07 08:12:57 crc kubenswrapper[4761]: E0307 08:12:57.641370 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785bd50e-a249-4021-83b3-ff8e33c343db" containerName="cinder-api-log" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.641376 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="785bd50e-a249-4021-83b3-ff8e33c343db" containerName="cinder-api-log" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.641632 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="785bd50e-a249-4021-83b3-ff8e33c343db" containerName="cinder-api-log" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.641646 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="785bd50e-a249-4021-83b3-ff8e33c343db" containerName="cinder-api" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.642972 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.651840 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.652011 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.652551 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.667187 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.733828 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="785bd50e-a249-4021-83b3-ff8e33c343db" path="/var/lib/kubelet/pods/785bd50e-a249-4021-83b3-ff8e33c343db/volumes" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.734628 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" path="/var/lib/kubelet/pods/ff736eba-5e3e-4608-8f3f-13783efb0735/volumes" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.746123 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.828449 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42f2382e-b335-47f4-8345-8544853fb91a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.829239 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj66z\" (UniqueName: 
\"kubernetes.io/projected/42f2382e-b335-47f4-8345-8544853fb91a-kube-api-access-kj66z\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.829299 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42f2382e-b335-47f4-8345-8544853fb91a-logs\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.829339 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.829376 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-scripts\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.829406 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-config-data\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.829442 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " 
pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.829553 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.829607 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-config-data-custom\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.836750 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78b5ffc596-hnhkw"] Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.837489 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78b5ffc596-hnhkw" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api-log" containerID="cri-o://3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab" gracePeriod=30 Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.837660 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78b5ffc596-hnhkw" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api" containerID="cri-o://e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1" gracePeriod=30 Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.932110 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.932225 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.932261 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-config-data-custom\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.932355 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42f2382e-b335-47f4-8345-8544853fb91a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.932431 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj66z\" (UniqueName: \"kubernetes.io/projected/42f2382e-b335-47f4-8345-8544853fb91a-kube-api-access-kj66z\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.932469 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42f2382e-b335-47f4-8345-8544853fb91a-logs\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.932493 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.932523 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-scripts\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.932546 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-config-data\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.933553 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42f2382e-b335-47f4-8345-8544853fb91a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.959627 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.967698 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-config-data\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 
08:12:57.968954 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42f2382e-b335-47f4-8345-8544853fb91a-logs\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.969347 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-config-data-custom\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.969616 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj66z\" (UniqueName: \"kubernetes.io/projected/42f2382e-b335-47f4-8345-8544853fb91a-kube-api-access-kj66z\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.970267 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.975303 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.980268 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-scripts\") pod \"cinder-api-0\" (UID: 
\"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.997330 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 07 08:12:58 crc kubenswrapper[4761]: I0307 08:12:58.132829 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerStarted","Data":"1e7c1a880355ecefe14e2eb240097ae879ac23e20a08685bf21fe65599254a91"} Mar 07 08:12:58 crc kubenswrapper[4761]: I0307 08:12:58.150759 4761 generic.go:334] "Generic (PLEG): container finished" podID="20073497-107b-4d6a-9210-121d5fc67d7f" containerID="3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab" exitCode=143 Mar 07 08:12:58 crc kubenswrapper[4761]: I0307 08:12:58.150850 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78b5ffc596-hnhkw" event={"ID":"20073497-107b-4d6a-9210-121d5fc67d7f","Type":"ContainerDied","Data":"3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab"} Mar 07 08:12:58 crc kubenswrapper[4761]: I0307 08:12:58.998273 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 07 08:12:59 crc kubenswrapper[4761]: I0307 08:12:59.170382 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"42f2382e-b335-47f4-8345-8544853fb91a","Type":"ContainerStarted","Data":"a041724acb3df1baa1c96ed8398c30fe6780c6cdc56914a3707377d7752f3641"} Mar 07 08:13:00 crc kubenswrapper[4761]: I0307 08:13:00.200455 4761 generic.go:334] "Generic (PLEG): container finished" podID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerID="838d2403b600902e213c3a5f93612e34608bca197d59c3727a1ff2eeb0d7feb7" exitCode=0 Mar 07 08:13:00 crc kubenswrapper[4761]: I0307 08:13:00.200607 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8dct" 
event={"ID":"3ad49ed9-8c84-4de1-830c-679262fc906d","Type":"ContainerDied","Data":"838d2403b600902e213c3a5f93612e34608bca197d59c3727a1ff2eeb0d7feb7"} Mar 07 08:13:00 crc kubenswrapper[4761]: I0307 08:13:00.482939 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 07 08:13:00 crc kubenswrapper[4761]: I0307 08:13:00.485587 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.212:8080/\": dial tcp 10.217.0.212:8080: connect: connection refused" Mar 07 08:13:00 crc kubenswrapper[4761]: I0307 08:13:00.591679 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:13:00 crc kubenswrapper[4761]: I0307 08:13:00.678817 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-82z7q"] Mar 07 08:13:00 crc kubenswrapper[4761]: I0307 08:13:00.679066 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" podUID="372f361d-256a-4a5b-a95d-4f3ff68e5827" containerName="dnsmasq-dns" containerID="cri-o://65bf03db04217df982e40e805335492498ff93bc011d82a0a0d18fa7cece75fb" gracePeriod=10 Mar 07 08:13:01 crc kubenswrapper[4761]: I0307 08:13:01.219518 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerStarted","Data":"f6773242ad8f5ad66928a7bfbd4218035821add88a1d594d2b7025e1d24427f0"} Mar 07 08:13:01 crc kubenswrapper[4761]: I0307 08:13:01.221750 4761 generic.go:334] "Generic (PLEG): container finished" podID="372f361d-256a-4a5b-a95d-4f3ff68e5827" containerID="65bf03db04217df982e40e805335492498ff93bc011d82a0a0d18fa7cece75fb" exitCode=0 Mar 07 08:13:01 crc kubenswrapper[4761]: I0307 08:13:01.221811 
4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" event={"ID":"372f361d-256a-4a5b-a95d-4f3ff68e5827","Type":"ContainerDied","Data":"65bf03db04217df982e40e805335492498ff93bc011d82a0a0d18fa7cece75fb"} Mar 07 08:13:01 crc kubenswrapper[4761]: I0307 08:13:01.223860 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px52h" event={"ID":"321917f1-f061-4e00-a598-2766772d2290","Type":"ContainerStarted","Data":"0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87"} Mar 07 08:13:01 crc kubenswrapper[4761]: I0307 08:13:01.235053 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"42f2382e-b335-47f4-8345-8544853fb91a","Type":"ContainerStarted","Data":"9f06ccbe5d88d3ed5394cae2c7d47a6b7c28ef3f51728b9e3c8664d6d6a55f4c"} Mar 07 08:13:01 crc kubenswrapper[4761]: I0307 08:13:01.457822 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78b5ffc596-hnhkw" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.208:9311/healthcheck\": read tcp 10.217.0.2:54240->10.217.0.208:9311: read: connection reset by peer" Mar 07 08:13:01 crc kubenswrapper[4761]: I0307 08:13:01.457869 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78b5ffc596-hnhkw" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.208:9311/healthcheck\": read tcp 10.217.0.2:54248->10.217.0.208:9311: read: connection reset by peer" Mar 07 08:13:01 crc kubenswrapper[4761]: I0307 08:13:01.933671 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.056741 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-swift-storage-0\") pod \"372f361d-256a-4a5b-a95d-4f3ff68e5827\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.056784 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-config\") pod \"372f361d-256a-4a5b-a95d-4f3ff68e5827\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.056894 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-sb\") pod \"372f361d-256a-4a5b-a95d-4f3ff68e5827\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.057035 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-nb\") pod \"372f361d-256a-4a5b-a95d-4f3ff68e5827\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.057064 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-svc\") pod \"372f361d-256a-4a5b-a95d-4f3ff68e5827\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.057083 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpblx\" 
(UniqueName: \"kubernetes.io/projected/372f361d-256a-4a5b-a95d-4f3ff68e5827-kube-api-access-vpblx\") pod \"372f361d-256a-4a5b-a95d-4f3ff68e5827\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.063870 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372f361d-256a-4a5b-a95d-4f3ff68e5827-kube-api-access-vpblx" (OuterVolumeSpecName: "kube-api-access-vpblx") pod "372f361d-256a-4a5b-a95d-4f3ff68e5827" (UID: "372f361d-256a-4a5b-a95d-4f3ff68e5827"). InnerVolumeSpecName "kube-api-access-vpblx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.069319 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.159330 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpblx\" (UniqueName: \"kubernetes.io/projected/372f361d-256a-4a5b-a95d-4f3ff68e5827-kube-api-access-vpblx\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.163097 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "372f361d-256a-4a5b-a95d-4f3ff68e5827" (UID: "372f361d-256a-4a5b-a95d-4f3ff68e5827"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.209145 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-config" (OuterVolumeSpecName: "config") pod "372f361d-256a-4a5b-a95d-4f3ff68e5827" (UID: "372f361d-256a-4a5b-a95d-4f3ff68e5827"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.218834 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "372f361d-256a-4a5b-a95d-4f3ff68e5827" (UID: "372f361d-256a-4a5b-a95d-4f3ff68e5827"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.239546 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "372f361d-256a-4a5b-a95d-4f3ff68e5827" (UID: "372f361d-256a-4a5b-a95d-4f3ff68e5827"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.244613 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "372f361d-256a-4a5b-a95d-4f3ff68e5827" (UID: "372f361d-256a-4a5b-a95d-4f3ff68e5827"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.261151 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data-custom\") pod \"20073497-107b-4d6a-9210-121d5fc67d7f\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.261619 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20073497-107b-4d6a-9210-121d5fc67d7f-logs\") pod \"20073497-107b-4d6a-9210-121d5fc67d7f\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.261706 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbff8\" (UniqueName: \"kubernetes.io/projected/20073497-107b-4d6a-9210-121d5fc67d7f-kube-api-access-dbff8\") pod \"20073497-107b-4d6a-9210-121d5fc67d7f\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.261910 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-combined-ca-bundle\") pod \"20073497-107b-4d6a-9210-121d5fc67d7f\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.261957 4761 generic.go:334] "Generic (PLEG): container finished" podID="321917f1-f061-4e00-a598-2766772d2290" containerID="0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87" exitCode=0 Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.261984 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data\") pod 
\"20073497-107b-4d6a-9210-121d5fc67d7f\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.262097 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20073497-107b-4d6a-9210-121d5fc67d7f-logs" (OuterVolumeSpecName: "logs") pod "20073497-107b-4d6a-9210-121d5fc67d7f" (UID: "20073497-107b-4d6a-9210-121d5fc67d7f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.262479 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px52h" event={"ID":"321917f1-f061-4e00-a598-2766772d2290","Type":"ContainerDied","Data":"0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87"} Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.263971 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.264343 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.264358 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.264368 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.264378 4761 reconciler_common.go:293] "Volume 
detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.264387 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20073497-107b-4d6a-9210-121d5fc67d7f-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.271503 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "20073497-107b-4d6a-9210-121d5fc67d7f" (UID: "20073497-107b-4d6a-9210-121d5fc67d7f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.272532 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20073497-107b-4d6a-9210-121d5fc67d7f-kube-api-access-dbff8" (OuterVolumeSpecName: "kube-api-access-dbff8") pod "20073497-107b-4d6a-9210-121d5fc67d7f" (UID: "20073497-107b-4d6a-9210-121d5fc67d7f"). InnerVolumeSpecName "kube-api-access-dbff8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.286976 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"42f2382e-b335-47f4-8345-8544853fb91a","Type":"ContainerStarted","Data":"d53d3b7ac2636f7a068f51bf811a8f8480e254159d870733ba2a1f62b63f1ce1"} Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.288754 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.319905 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20073497-107b-4d6a-9210-121d5fc67d7f" (UID: "20073497-107b-4d6a-9210-121d5fc67d7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.320704 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerStarted","Data":"ba99a8539887bd00737654b83bfe8ca6e1811fbef44c02ee311e49fb9b5a8c3d"} Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.328748 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.328728702 podStartE2EDuration="5.328728702s" podCreationTimestamp="2026-03-07 08:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:13:02.318623757 +0000 UTC m=+1439.227790222" watchObservedRunningTime="2026-03-07 08:13:02.328728702 +0000 UTC m=+1439.237895177" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.339259 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.340445 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" event={"ID":"372f361d-256a-4a5b-a95d-4f3ff68e5827","Type":"ContainerDied","Data":"760377cf8d09eb0d05b2158590bcd80bfc092a167d90ae20615499b39b777451"} Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.340502 4761 scope.go:117] "RemoveContainer" containerID="65bf03db04217df982e40e805335492498ff93bc011d82a0a0d18fa7cece75fb" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.364741 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8dct" event={"ID":"3ad49ed9-8c84-4de1-830c-679262fc906d","Type":"ContainerStarted","Data":"37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd"} Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.367268 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.367293 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbff8\" (UniqueName: \"kubernetes.io/projected/20073497-107b-4d6a-9210-121d5fc67d7f-kube-api-access-dbff8\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.367305 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.374876 4761 generic.go:334] "Generic (PLEG): container finished" podID="20073497-107b-4d6a-9210-121d5fc67d7f" containerID="e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1" exitCode=0 Mar 07 08:13:02 crc 
kubenswrapper[4761]: I0307 08:13:02.374927 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78b5ffc596-hnhkw" event={"ID":"20073497-107b-4d6a-9210-121d5fc67d7f","Type":"ContainerDied","Data":"e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1"} Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.374961 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78b5ffc596-hnhkw" event={"ID":"20073497-107b-4d6a-9210-121d5fc67d7f","Type":"ContainerDied","Data":"fb94c0db206c71eafed7d48986873362a51c16d0b640332f8e5ab0454d7c9925"} Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.375047 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.383026 4761 scope.go:117] "RemoveContainer" containerID="f517bc7fea3e3513f659bab2cf2fa980f3544430eb492d4d903992e483bbae35" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.403379 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data" (OuterVolumeSpecName: "config-data") pod "20073497-107b-4d6a-9210-121d5fc67d7f" (UID: "20073497-107b-4d6a-9210-121d5fc67d7f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.412841 4761 scope.go:117] "RemoveContainer" containerID="e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1"
Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.419799 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z8dct" podStartSLOduration=4.788390165 podStartE2EDuration="11.41977182s" podCreationTimestamp="2026-03-07 08:12:51 +0000 UTC" firstStartedPulling="2026-03-07 08:12:54.690360891 +0000 UTC m=+1431.599527366" lastFinishedPulling="2026-03-07 08:13:01.321742546 +0000 UTC m=+1438.230909021" observedRunningTime="2026-03-07 08:13:02.38251706 +0000 UTC m=+1439.291683535" watchObservedRunningTime="2026-03-07 08:13:02.41977182 +0000 UTC m=+1439.328938295"
Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.443465 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-82z7q"]
Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.456323 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-82z7q"]
Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.469853 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.507362 4761 scope.go:117] "RemoveContainer" containerID="3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab"
Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.536164 4761 scope.go:117] "RemoveContainer" containerID="e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1"
Mar 07 08:13:02 crc kubenswrapper[4761]: E0307 08:13:02.536878 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1\": container with ID starting with e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1 not found: ID does not exist" containerID="e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1"
Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.536939 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1"} err="failed to get container status \"e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1\": rpc error: code = NotFound desc = could not find container \"e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1\": container with ID starting with e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1 not found: ID does not exist"
Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.536975 4761 scope.go:117] "RemoveContainer" containerID="3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab"
Mar 07 08:13:02 crc kubenswrapper[4761]: E0307 08:13:02.537338 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab\": container with ID starting with 3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab not found: ID does not exist" containerID="3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab"
Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.537394 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab"} err="failed to get container status \"3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab\": rpc error: code = NotFound desc = could not find container \"3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab\": container with ID starting with 3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab not found: ID does not exist"
Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.714822 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78b5ffc596-hnhkw"]
Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.727340 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-78b5ffc596-hnhkw"]
Mar 07 08:13:03 crc kubenswrapper[4761]: I0307 08:13:03.737862 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" path="/var/lib/kubelet/pods/20073497-107b-4d6a-9210-121d5fc67d7f/volumes"
Mar 07 08:13:03 crc kubenswrapper[4761]: I0307 08:13:03.739263 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="372f361d-256a-4a5b-a95d-4f3ff68e5827" path="/var/lib/kubelet/pods/372f361d-256a-4a5b-a95d-4f3ff68e5827/volumes"
Mar 07 08:13:04 crc kubenswrapper[4761]: I0307 08:13:04.631514 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q5jjc"
Mar 07 08:13:04 crc kubenswrapper[4761]: I0307 08:13:04.631891 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q5jjc"
Mar 07 08:13:04 crc kubenswrapper[4761]: I0307 08:13:04.858419 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:13:04 crc kubenswrapper[4761]: I0307 08:13:04.859911 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-84bcb6db96-7gd85"
Mar 07 08:13:04 crc kubenswrapper[4761]: I0307 08:13:04.865446 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:13:04 crc kubenswrapper[4761]: I0307 08:13:04.893426 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-84bcb6db96-7gd85"
Mar 07 08:13:05 crc kubenswrapper[4761]: I0307 08:13:05.202580 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-548cccfb88-8f8gk"]
Mar 07 08:13:05 crc kubenswrapper[4761]: I0307 08:13:05.436670 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerStarted","Data":"c325c77b42db574db4c21df4095eb6524c92395bfdd12bb74dedec271e75adc9"}
Mar 07 08:13:05 crc kubenswrapper[4761]: I0307 08:13:05.447289 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px52h" event={"ID":"321917f1-f061-4e00-a598-2766772d2290","Type":"ContainerStarted","Data":"2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411"}
Mar 07 08:13:05 crc kubenswrapper[4761]: I0307 08:13:05.478496 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-px52h" podStartSLOduration=3.354072882 podStartE2EDuration="13.478478421s" podCreationTimestamp="2026-03-07 08:12:52 +0000 UTC" firstStartedPulling="2026-03-07 08:12:54.6546851 +0000 UTC m=+1431.563851575" lastFinishedPulling="2026-03-07 08:13:04.779090639 +0000 UTC m=+1441.688257114" observedRunningTime="2026-03-07 08:13:05.470174642 +0000 UTC m=+1442.379341127" watchObservedRunningTime="2026-03-07 08:13:05.478478421 +0000 UTC m=+1442.387644896"
Mar 07 08:13:05 crc kubenswrapper[4761]: I0307 08:13:05.728353 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5jjc" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" probeResult="failure" output=<
Mar 07 08:13:05 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 08:13:05 crc kubenswrapper[4761]: >
Mar 07 08:13:05 crc kubenswrapper[4761]: I0307 08:13:05.935961 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 08:13:06 crc kubenswrapper[4761]: I0307 08:13:06.455968 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-548cccfb88-8f8gk" podUID="befe03c6-a479-47be-a462-d94a93217344" containerName="placement-log" containerID="cri-o://9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b" gracePeriod=30
Mar 07 08:13:06 crc kubenswrapper[4761]: I0307 08:13:06.456001 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-548cccfb88-8f8gk" podUID="befe03c6-a479-47be-a462-d94a93217344" containerName="placement-api" containerID="cri-o://81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b" gracePeriod=30
Mar 07 08:13:07 crc kubenswrapper[4761]: I0307 08:13:07.482986 4761 generic.go:334] "Generic (PLEG): container finished" podID="befe03c6-a479-47be-a462-d94a93217344" containerID="9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b" exitCode=143
Mar 07 08:13:07 crc kubenswrapper[4761]: I0307 08:13:07.483164 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-548cccfb88-8f8gk" event={"ID":"befe03c6-a479-47be-a462-d94a93217344","Type":"ContainerDied","Data":"9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b"}
Mar 07 08:13:07 crc kubenswrapper[4761]: I0307 08:13:07.488123 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerStarted","Data":"213d483fa167f3e6c93de90e4309ca1493c59717e0a9530f798884a133193c58"}
Mar 07 08:13:07 crc kubenswrapper[4761]: I0307 08:13:07.489893 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 07 08:13:07 crc kubenswrapper[4761]: I0307 08:13:07.518206 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.266100341 podStartE2EDuration="11.518185953s" podCreationTimestamp="2026-03-07 08:12:56 +0000 UTC" firstStartedPulling="2026-03-07 08:12:57.536879486 +0000 UTC m=+1434.446045961" lastFinishedPulling="2026-03-07 08:13:06.788965098 +0000 UTC m=+1443.698131573" observedRunningTime="2026-03-07 08:13:07.513289339 +0000 UTC m=+1444.422455824" watchObservedRunningTime="2026-03-07 08:13:07.518185953 +0000 UTC m=+1444.427352428"
Mar 07 08:13:08 crc kubenswrapper[4761]: I0307 08:13:08.550259 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-668988d5d5-hwhxv"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.431221 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Mar 07 08:13:09 crc kubenswrapper[4761]: E0307 08:13:09.431973 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.431992 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api"
Mar 07 08:13:09 crc kubenswrapper[4761]: E0307 08:13:09.432010 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372f361d-256a-4a5b-a95d-4f3ff68e5827" containerName="dnsmasq-dns"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.432016 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="372f361d-256a-4a5b-a95d-4f3ff68e5827" containerName="dnsmasq-dns"
Mar 07 08:13:09 crc kubenswrapper[4761]: E0307 08:13:09.432038 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372f361d-256a-4a5b-a95d-4f3ff68e5827" containerName="init"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.432044 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="372f361d-256a-4a5b-a95d-4f3ff68e5827" containerName="init"
Mar 07 08:13:09 crc kubenswrapper[4761]: E0307 08:13:09.432051 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api-log"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.432057 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api-log"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.432278 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="372f361d-256a-4a5b-a95d-4f3ff68e5827" containerName="dnsmasq-dns"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.432289 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api-log"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.432308 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.433094 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.435993 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.439769 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.439966 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-mpnsr"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.448876 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.547962 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/212a33ff-09a0-4654-adff-687f8d9145a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.548018 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vlwd\" (UniqueName: \"kubernetes.io/projected/212a33ff-09a0-4654-adff-687f8d9145a6-kube-api-access-6vlwd\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.548278 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212a33ff-09a0-4654-adff-687f8d9145a6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.548329 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/212a33ff-09a0-4654-adff-687f8d9145a6-openstack-config\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.650021 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/212a33ff-09a0-4654-adff-687f8d9145a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.650067 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vlwd\" (UniqueName: \"kubernetes.io/projected/212a33ff-09a0-4654-adff-687f8d9145a6-kube-api-access-6vlwd\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.650176 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212a33ff-09a0-4654-adff-687f8d9145a6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.650212 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/212a33ff-09a0-4654-adff-687f8d9145a6-openstack-config\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.651160 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/212a33ff-09a0-4654-adff-687f8d9145a6-openstack-config\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.661400 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212a33ff-09a0-4654-adff-687f8d9145a6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.667212 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/212a33ff-09a0-4654-adff-687f8d9145a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.672964 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vlwd\" (UniqueName: \"kubernetes.io/projected/212a33ff-09a0-4654-adff-687f8d9145a6-kube-api-access-6vlwd\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient"
Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.756891 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.296734 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.432902 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.480802 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befe03c6-a479-47be-a462-d94a93217344-logs\") pod \"befe03c6-a479-47be-a462-d94a93217344\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") "
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.480889 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-config-data\") pod \"befe03c6-a479-47be-a462-d94a93217344\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") "
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.480911 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-combined-ca-bundle\") pod \"befe03c6-a479-47be-a462-d94a93217344\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") "
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.481008 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-public-tls-certs\") pod \"befe03c6-a479-47be-a462-d94a93217344\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") "
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.481106 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-scripts\") pod \"befe03c6-a479-47be-a462-d94a93217344\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") "
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.481154 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk987\" (UniqueName: \"kubernetes.io/projected/befe03c6-a479-47be-a462-d94a93217344-kube-api-access-qk987\") pod \"befe03c6-a479-47be-a462-d94a93217344\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") "
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.481276 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-internal-tls-certs\") pod \"befe03c6-a479-47be-a462-d94a93217344\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") "
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.482288 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/befe03c6-a479-47be-a462-d94a93217344-logs" (OuterVolumeSpecName: "logs") pod "befe03c6-a479-47be-a462-d94a93217344" (UID: "befe03c6-a479-47be-a462-d94a93217344"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.489917 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-scripts" (OuterVolumeSpecName: "scripts") pod "befe03c6-a479-47be-a462-d94a93217344" (UID: "befe03c6-a479-47be-a462-d94a93217344"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.491306 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/befe03c6-a479-47be-a462-d94a93217344-kube-api-access-qk987" (OuterVolumeSpecName: "kube-api-access-qk987") pod "befe03c6-a479-47be-a462-d94a93217344" (UID: "befe03c6-a479-47be-a462-d94a93217344"). InnerVolumeSpecName "kube-api-access-qk987". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.502708 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.539094 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"212a33ff-09a0-4654-adff-687f8d9145a6","Type":"ContainerStarted","Data":"0c49076830b8c3aba59503de7ee1cb9c6cbf9662e37999e76a1ae1181790ec16"}
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.547775 4761 generic.go:334] "Generic (PLEG): container finished" podID="befe03c6-a479-47be-a462-d94a93217344" containerID="81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b" exitCode=0
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.547954 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-548cccfb88-8f8gk"
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.548508 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-548cccfb88-8f8gk" event={"ID":"befe03c6-a479-47be-a462-d94a93217344","Type":"ContainerDied","Data":"81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b"}
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.548613 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-548cccfb88-8f8gk" event={"ID":"befe03c6-a479-47be-a462-d94a93217344","Type":"ContainerDied","Data":"86c8561318980ddc9b03e998f8c8e8c8ed4238129497411ddba218873461884d"}
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.548637 4761 scope.go:117] "RemoveContainer" containerID="81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b"
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.584349 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befe03c6-a479-47be-a462-d94a93217344-logs\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.584377 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-scripts\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.584386 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk987\" (UniqueName: \"kubernetes.io/projected/befe03c6-a479-47be-a462-d94a93217344-kube-api-access-qk987\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.588668 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.600137 4761 scope.go:117] "RemoveContainer" containerID="9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b"
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.600293 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="cinder-scheduler" containerID="cri-o://42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e" gracePeriod=30
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.600532 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="probe" containerID="cri-o://6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3" gracePeriod=30
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.617917 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-config-data" (OuterVolumeSpecName: "config-data") pod "befe03c6-a479-47be-a462-d94a93217344" (UID: "befe03c6-a479-47be-a462-d94a93217344"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.623824 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "befe03c6-a479-47be-a462-d94a93217344" (UID: "befe03c6-a479-47be-a462-d94a93217344"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.633813 4761 scope.go:117] "RemoveContainer" containerID="81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b"
Mar 07 08:13:10 crc kubenswrapper[4761]: E0307 08:13:10.634322 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b\": container with ID starting with 81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b not found: ID does not exist" containerID="81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b"
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.634365 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b"} err="failed to get container status \"81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b\": rpc error: code = NotFound desc = could not find container \"81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b\": container with ID starting with 81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b not found: ID does not exist"
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.634390 4761 scope.go:117] "RemoveContainer" containerID="9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b"
Mar 07 08:13:10 crc kubenswrapper[4761]: E0307 08:13:10.634810 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b\": container with ID starting with 9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b not found: ID does not exist" containerID="9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b"
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.634852 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b"} err="failed to get container status \"9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b\": rpc error: code = NotFound desc = could not find container \"9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b\": container with ID starting with 9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b not found: ID does not exist"
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.650127 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "befe03c6-a479-47be-a462-d94a93217344" (UID: "befe03c6-a479-47be-a462-d94a93217344"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.661661 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "befe03c6-a479-47be-a462-d94a93217344" (UID: "befe03c6-a479-47be-a462-d94a93217344"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.686550 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-config-data\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.686584 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.686595 4761 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.686603 4761 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.890968 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-548cccfb88-8f8gk"]
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.902556 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-548cccfb88-8f8gk"]
Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.946871 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Mar 07 08:13:11 crc kubenswrapper[4761]: I0307 08:13:11.750566 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="befe03c6-a479-47be-a462-d94a93217344" path="/var/lib/kubelet/pods/befe03c6-a479-47be-a462-d94a93217344/volumes"
Mar 07 08:13:11 crc kubenswrapper[4761]: E0307 08:13:11.786597 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod347f09d3_6f9f_4eb1_a655_02e6af151d29.slice/crio-6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod347f09d3_6f9f_4eb1_a655_02e6af151d29.slice/crio-conmon-42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod347f09d3_6f9f_4eb1_a655_02e6af151d29.slice/crio-conmon-6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3.scope\": RecentStats: unable to find data in memory cache]"
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.051907 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z8dct"
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.052821 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z8dct"
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.229696 4761 scope.go:117] "RemoveContainer" containerID="968fafc69d37a3fd58309d6988cdcb39d53648dbd54cc347939d1e9351949eab"
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.477352 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.586901 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-px52h"
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.587327 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-px52h"
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.600053 4761 generic.go:334] "Generic (PLEG): container finished" podID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerID="6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3" exitCode=0
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.600085 4761 generic.go:334] "Generic (PLEG): container finished" podID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerID="42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e" exitCode=0
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.600819 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"347f09d3-6f9f-4eb1-a655-02e6af151d29","Type":"ContainerDied","Data":"6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3"}
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.600886 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"347f09d3-6f9f-4eb1-a655-02e6af151d29","Type":"ContainerDied","Data":"42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e"}
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.600908 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"347f09d3-6f9f-4eb1-a655-02e6af151d29","Type":"ContainerDied","Data":"9ebd8fc7adeeaf09d3d06b2d65d582f2062424f6e484b1bb1b5c97a9e8444be2"}
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.600954 4761 scope.go:117] "RemoveContainer" containerID="6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3"
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.603393 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.652472 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9b8l\" (UniqueName: \"kubernetes.io/projected/347f09d3-6f9f-4eb1-a655-02e6af151d29-kube-api-access-z9b8l\") pod \"347f09d3-6f9f-4eb1-a655-02e6af151d29\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") "
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.652594 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-combined-ca-bundle\") pod \"347f09d3-6f9f-4eb1-a655-02e6af151d29\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") "
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.652630 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-scripts\") pod \"347f09d3-6f9f-4eb1-a655-02e6af151d29\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") "
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.652661 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data-custom\") pod \"347f09d3-6f9f-4eb1-a655-02e6af151d29\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") "
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.652862 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data\") pod \"347f09d3-6f9f-4eb1-a655-02e6af151d29\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") "
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.652878 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/347f09d3-6f9f-4eb1-a655-02e6af151d29-etc-machine-id\") pod \"347f09d3-6f9f-4eb1-a655-02e6af151d29\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") "
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.653410 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347f09d3-6f9f-4eb1-a655-02e6af151d29-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "347f09d3-6f9f-4eb1-a655-02e6af151d29" (UID: "347f09d3-6f9f-4eb1-a655-02e6af151d29"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.665245 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347f09d3-6f9f-4eb1-a655-02e6af151d29-kube-api-access-z9b8l" (OuterVolumeSpecName: "kube-api-access-z9b8l") pod "347f09d3-6f9f-4eb1-a655-02e6af151d29" (UID: "347f09d3-6f9f-4eb1-a655-02e6af151d29"). InnerVolumeSpecName "kube-api-access-z9b8l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.673737 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-scripts" (OuterVolumeSpecName: "scripts") pod "347f09d3-6f9f-4eb1-a655-02e6af151d29" (UID: "347f09d3-6f9f-4eb1-a655-02e6af151d29"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.690902 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "347f09d3-6f9f-4eb1-a655-02e6af151d29" (UID: "347f09d3-6f9f-4eb1-a655-02e6af151d29"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.705077 4761 scope.go:117] "RemoveContainer" containerID="42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e"
Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.740340 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "347f09d3-6f9f-4eb1-a655-02e6af151d29" (UID: "347f09d3-6f9f-4eb1-a655-02e6af151d29"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.756343 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9b8l\" (UniqueName: \"kubernetes.io/projected/347f09d3-6f9f-4eb1-a655-02e6af151d29-kube-api-access-z9b8l\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.756395 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.756406 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.756414 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.756422 4761 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/347f09d3-6f9f-4eb1-a655-02e6af151d29-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.787909 4761 scope.go:117] "RemoveContainer" containerID="6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3" Mar 07 08:13:12 crc kubenswrapper[4761]: E0307 08:13:12.788512 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3\": container with ID starting with 6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3 not found: ID does not exist" 
containerID="6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.788550 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3"} err="failed to get container status \"6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3\": rpc error: code = NotFound desc = could not find container \"6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3\": container with ID starting with 6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3 not found: ID does not exist" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.788576 4761 scope.go:117] "RemoveContainer" containerID="42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e" Mar 07 08:13:12 crc kubenswrapper[4761]: E0307 08:13:12.788891 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e\": container with ID starting with 42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e not found: ID does not exist" containerID="42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.788912 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e"} err="failed to get container status \"42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e\": rpc error: code = NotFound desc = could not find container \"42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e\": container with ID starting with 42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e not found: ID does not exist" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.788924 4761 scope.go:117] 
"RemoveContainer" containerID="6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.789114 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3"} err="failed to get container status \"6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3\": rpc error: code = NotFound desc = could not find container \"6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3\": container with ID starting with 6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3 not found: ID does not exist" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.789134 4761 scope.go:117] "RemoveContainer" containerID="42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.789292 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e"} err="failed to get container status \"42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e\": rpc error: code = NotFound desc = could not find container \"42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e\": container with ID starting with 42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e not found: ID does not exist" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.821258 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data" (OuterVolumeSpecName: "config-data") pod "347f09d3-6f9f-4eb1-a655-02e6af151d29" (UID: "347f09d3-6f9f-4eb1-a655-02e6af151d29"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.859045 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.949213 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.980761 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.008638 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 08:13:13 crc kubenswrapper[4761]: E0307 08:13:13.012245 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="befe03c6-a479-47be-a462-d94a93217344" containerName="placement-log" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.012541 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="befe03c6-a479-47be-a462-d94a93217344" containerName="placement-log" Mar 07 08:13:13 crc kubenswrapper[4761]: E0307 08:13:13.012633 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="cinder-scheduler" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.012730 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="cinder-scheduler" Mar 07 08:13:13 crc kubenswrapper[4761]: E0307 08:13:13.012837 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="befe03c6-a479-47be-a462-d94a93217344" containerName="placement-api" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.012922 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="befe03c6-a479-47be-a462-d94a93217344" containerName="placement-api" Mar 07 08:13:13 crc kubenswrapper[4761]: 
E0307 08:13:13.013038 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="probe" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.013119 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="probe" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.013469 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="befe03c6-a479-47be-a462-d94a93217344" containerName="placement-api" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.013608 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="probe" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.013701 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="cinder-scheduler" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.013910 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="befe03c6-a479-47be-a462-d94a93217344" containerName="placement-log" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.015553 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.018281 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.027100 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.166195 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.166439 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-scripts\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.166539 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ab7bc1-753e-437c-bd70-130581863fde-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.166563 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xvgm\" (UniqueName: \"kubernetes.io/projected/69ab7bc1-753e-437c-bd70-130581863fde-kube-api-access-8xvgm\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 
08:13:13.166582 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-config-data\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.166633 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.212780 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-z8dct" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="registry-server" probeResult="failure" output=< Mar 07 08:13:13 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:13:13 crc kubenswrapper[4761]: > Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.269171 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ab7bc1-753e-437c-bd70-130581863fde-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.269238 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xvgm\" (UniqueName: \"kubernetes.io/projected/69ab7bc1-753e-437c-bd70-130581863fde-kube-api-access-8xvgm\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.269262 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-config-data\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.269360 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.269477 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.269510 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-scripts\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.269897 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ab7bc1-753e-437c-bd70-130581863fde-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.277768 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.281358 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.281926 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-scripts\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.282993 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.298014 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xvgm\" (UniqueName: \"kubernetes.io/projected/69ab7bc1-753e-437c-bd70-130581863fde-kube-api-access-8xvgm\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.341061 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.660180 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-px52h" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="registry-server" probeResult="failure" output=< Mar 07 08:13:13 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:13:13 crc kubenswrapper[4761]: > Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.740616 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" path="/var/lib/kubelet/pods/347f09d3-6f9f-4eb1-a655-02e6af151d29/volumes" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.874890 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 08:13:13 crc kubenswrapper[4761]: W0307 08:13:13.886989 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69ab7bc1_753e_437c_bd70_130581863fde.slice/crio-f2d406aa6461842f50d7fff32d70cbdac48672fdcd51b5c2b7f7e1b67d1b262e WatchSource:0}: Error finding container f2d406aa6461842f50d7fff32d70cbdac48672fdcd51b5c2b7f7e1b67d1b262e: Status 404 returned error can't find the container with id f2d406aa6461842f50d7fff32d70cbdac48672fdcd51b5c2b7f7e1b67d1b262e Mar 07 08:13:14 crc kubenswrapper[4761]: I0307 08:13:14.645980 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"69ab7bc1-753e-437c-bd70-130581863fde","Type":"ContainerStarted","Data":"f2d406aa6461842f50d7fff32d70cbdac48672fdcd51b5c2b7f7e1b67d1b262e"} Mar 07 08:13:14 crc kubenswrapper[4761]: I0307 08:13:14.905080 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:13:15 crc kubenswrapper[4761]: I0307 08:13:15.011744 4761 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/neutron-57b6497888-fkqsr"] Mar 07 08:13:15 crc kubenswrapper[4761]: I0307 08:13:15.016013 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-57b6497888-fkqsr" podUID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerName="neutron-api" containerID="cri-o://08a531bbea56745d96bf5808403f6dcf83e2a3f8d100ed5e64c06f8c0c91449a" gracePeriod=30 Mar 07 08:13:15 crc kubenswrapper[4761]: I0307 08:13:15.016213 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-57b6497888-fkqsr" podUID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerName="neutron-httpd" containerID="cri-o://4543315a954e687de718677391705bbdbd0406d681cb87f746858f8b56f4bc7b" gracePeriod=30 Mar 07 08:13:15 crc kubenswrapper[4761]: I0307 08:13:15.704920 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"69ab7bc1-753e-437c-bd70-130581863fde","Type":"ContainerStarted","Data":"d1593393ea8982a1ba24a2a7870fa9fc1e67f00e525f221c8b96901d677b86a6"} Mar 07 08:13:15 crc kubenswrapper[4761]: I0307 08:13:15.751195 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5jjc" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" probeResult="failure" output=< Mar 07 08:13:15 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:13:15 crc kubenswrapper[4761]: > Mar 07 08:13:15 crc kubenswrapper[4761]: I0307 08:13:15.770125 4761 generic.go:334] "Generic (PLEG): container finished" podID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerID="4543315a954e687de718677391705bbdbd0406d681cb87f746858f8b56f4bc7b" exitCode=0 Mar 07 08:13:15 crc kubenswrapper[4761]: I0307 08:13:15.770185 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57b6497888-fkqsr" 
event={"ID":"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a","Type":"ContainerDied","Data":"4543315a954e687de718677391705bbdbd0406d681cb87f746858f8b56f4bc7b"} Mar 07 08:13:16 crc kubenswrapper[4761]: I0307 08:13:16.782111 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"69ab7bc1-753e-437c-bd70-130581863fde","Type":"ContainerStarted","Data":"89aece0cc3b63fe2606d83b93fc988c807dc617398ab19ce836a00ebe670ed87"} Mar 07 08:13:16 crc kubenswrapper[4761]: I0307 08:13:16.814587 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.814566172 podStartE2EDuration="4.814566172s" podCreationTimestamp="2026-03-07 08:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:13:16.80500669 +0000 UTC m=+1453.714173165" watchObservedRunningTime="2026-03-07 08:13:16.814566172 +0000 UTC m=+1453.723732667" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.367770 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-fc87bd775-l8cjx"] Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.369968 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.376291 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.376349 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.376297 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-8k8rs" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.424472 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-fc87bd775-l8cjx"] Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.493410 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7f7585cb88-jshvv"] Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.495625 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.503384 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7f7585cb88-jshvv"] Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.503942 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.535097 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data-custom\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.535231 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.535282 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-combined-ca-bundle\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.535309 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckcgq\" (UniqueName: \"kubernetes.io/projected/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-kube-api-access-ckcgq\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " 
pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.541418 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-965pw"] Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.543240 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.574331 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-965pw"] Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.603755 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6f94956c9f-xbq22"] Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.605127 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.609860 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637027 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637143 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-combined-ca-bundle\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637185 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637229 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637257 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rshhg\" (UniqueName: \"kubernetes.io/projected/17f15fe3-9df7-4bd6-8bca-d357f52e458d-kube-api-access-rshhg\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637285 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637312 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-combined-ca-bundle\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637338 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9slj\" (UniqueName: \"kubernetes.io/projected/3fa2e962-e967-40fc-b5e5-4ae20c68a139-kube-api-access-m9slj\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637366 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckcgq\" (UniqueName: \"kubernetes.io/projected/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-kube-api-access-ckcgq\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637396 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-config\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637506 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637531 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data-custom\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637569 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data-custom\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637602 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.668388 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.676739 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-combined-ca-bundle\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.677008 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data-custom\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.678764 4761 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/heat-api-6f94956c9f-xbq22"] Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.706628 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckcgq\" (UniqueName: \"kubernetes.io/projected/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-kube-api-access-ckcgq\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.740234 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-combined-ca-bundle\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.740341 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.740368 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rshhg\" (UniqueName: \"kubernetes.io/projected/17f15fe3-9df7-4bd6-8bca-d357f52e458d-kube-api-access-rshhg\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.740540 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " 
pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.742276 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9slj\" (UniqueName: \"kubernetes.io/projected/3fa2e962-e967-40fc-b5e5-4ae20c68a139-kube-api-access-m9slj\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.742333 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-config\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.742514 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data-custom\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.742576 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-combined-ca-bundle\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.742655 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljfgw\" (UniqueName: \"kubernetes.io/projected/19b5d822-117e-4890-9ef2-6e75fc9a5c98-kube-api-access-ljfgw\") pod \"heat-api-6f94956c9f-xbq22\" (UID: 
\"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.742760 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.742873 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data-custom\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.742926 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.742968 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.743009 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: 
\"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.744442 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.746282 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.746936 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-config\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.746941 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.747420 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc 
kubenswrapper[4761]: I0307 08:13:17.757540 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.757652 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data-custom\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.759369 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-combined-ca-bundle\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.771603 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rshhg\" (UniqueName: \"kubernetes.io/projected/17f15fe3-9df7-4bd6-8bca-d357f52e458d-kube-api-access-rshhg\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.792270 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9slj\" (UniqueName: \"kubernetes.io/projected/3fa2e962-e967-40fc-b5e5-4ae20c68a139-kube-api-access-m9slj\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.855059 4761 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.864772 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data-custom\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.864887 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-combined-ca-bundle\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.865071 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljfgw\" (UniqueName: \"kubernetes.io/projected/19b5d822-117e-4890-9ef2-6e75fc9a5c98-kube-api-access-ljfgw\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.865175 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.887007 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-858bf88ddc-crlf2"] Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.895432 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.901080 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data-custom\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.906304 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.906537 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.906754 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.909636 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.917827 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-combined-ca-bundle\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.924125 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.937594 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljfgw\" (UniqueName: \"kubernetes.io/projected/19b5d822-117e-4890-9ef2-6e75fc9a5c98-kube-api-access-ljfgw\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.958686 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:17.996558 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-858bf88ddc-crlf2"] Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:17.998888 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.009098 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-internal-tls-certs\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.009176 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-log-httpd\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.009206 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn4p4\" (UniqueName: \"kubernetes.io/projected/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-kube-api-access-wn4p4\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.009277 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-combined-ca-bundle\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.009335 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-run-httpd\") pod 
\"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.009367 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-config-data\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.009393 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-public-tls-certs\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.009431 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-etc-swift\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.116029 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-combined-ca-bundle\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.116105 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-run-httpd\") pod 
\"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.116148 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-config-data\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.116174 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-public-tls-certs\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.116207 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-etc-swift\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.116266 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-internal-tls-certs\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.116307 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-log-httpd\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: 
\"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.116329 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn4p4\" (UniqueName: \"kubernetes.io/projected/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-kube-api-access-wn4p4\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.117074 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-run-httpd\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.124354 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-combined-ca-bundle\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.125435 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-etc-swift\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.129313 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-log-httpd\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 
08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.130361 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-config-data\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.136203 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-internal-tls-certs\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.144572 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-public-tls-certs\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.163937 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn4p4\" (UniqueName: \"kubernetes.io/projected/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-kube-api-access-wn4p4\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.340427 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.343083 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.880003 4761 generic.go:334] "Generic (PLEG): container finished" podID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerID="08a531bbea56745d96bf5808403f6dcf83e2a3f8d100ed5e64c06f8c0c91449a" exitCode=0 Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.881314 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57b6497888-fkqsr" event={"ID":"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a","Type":"ContainerDied","Data":"08a531bbea56745d96bf5808403f6dcf83e2a3f8d100ed5e64c06f8c0c91449a"} Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.164775 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7f7585cb88-jshvv"] Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.213851 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-965pw"] Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.266781 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f94956c9f-xbq22"] Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.307454 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-fc87bd775-l8cjx"] Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.456319 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.579341 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-httpd-config\") pod \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.581135 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-ovndb-tls-certs\") pod \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.581216 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-combined-ca-bundle\") pod \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.581326 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvwj5\" (UniqueName: \"kubernetes.io/projected/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-kube-api-access-dvwj5\") pod \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.581427 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-config\") pod \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.596423 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-kube-api-access-dvwj5" (OuterVolumeSpecName: "kube-api-access-dvwj5") pod "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" (UID: "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a"). InnerVolumeSpecName "kube-api-access-dvwj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.598985 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" (UID: "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.626570 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-858bf88ddc-crlf2"] Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.686532 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvwj5\" (UniqueName: \"kubernetes.io/projected/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-kube-api-access-dvwj5\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.686560 4761 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.779436 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" (UID: "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.780374 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" (UID: "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.788910 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.788941 4761 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.804470 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-config" (OuterVolumeSpecName: "config") pod "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" (UID: "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.900870 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.970775 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-858bf88ddc-crlf2" event={"ID":"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec","Type":"ContainerStarted","Data":"3213e5617d35580cfda1624ffdaf02f802339d2255a8f1b26e58eb44a22f2121"} Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.970820 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.970841 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-fc87bd775-l8cjx" event={"ID":"26d13a5f-64b5-41e8-a74f-1c46a4f38dad","Type":"ContainerStarted","Data":"d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9"} Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.970851 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-fc87bd775-l8cjx" event={"ID":"26d13a5f-64b5-41e8-a74f-1c46a4f38dad","Type":"ContainerStarted","Data":"18c8621e9c8c6855be61ecdbb44efba4c8635cfc8bca4aede6e4347459299c55"} Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.972259 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-fc87bd775-l8cjx" podStartSLOduration=2.972248361 podStartE2EDuration="2.972248361s" podCreationTimestamp="2026-03-07 08:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:13:19.959743005 +0000 UTC m=+1456.868909480" watchObservedRunningTime="2026-03-07 08:13:19.972248361 +0000 UTC m=+1456.881414836" Mar 07 
08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.973988 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57b6497888-fkqsr" event={"ID":"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a","Type":"ContainerDied","Data":"fce39649108dd6c35261761b0b230664523883842506b7bc99ece68767f72a5f"} Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.974049 4761 scope.go:117] "RemoveContainer" containerID="4543315a954e687de718677391705bbdbd0406d681cb87f746858f8b56f4bc7b" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.974279 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.977317 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f94956c9f-xbq22" event={"ID":"19b5d822-117e-4890-9ef2-6e75fc9a5c98","Type":"ContainerStarted","Data":"389c6e57a5ddd4c58896f736a625938e5c1131cab1dad08fa30ca3830ba2988c"} Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.987494 4761 generic.go:334] "Generic (PLEG): container finished" podID="3fa2e962-e967-40fc-b5e5-4ae20c68a139" containerID="1ecae72867ce15c7a0313b5c34b8ca58e83a3ffff4e98873805434f8cbe5b2e6" exitCode=0 Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.987591 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" event={"ID":"3fa2e962-e967-40fc-b5e5-4ae20c68a139","Type":"ContainerDied","Data":"1ecae72867ce15c7a0313b5c34b8ca58e83a3ffff4e98873805434f8cbe5b2e6"} Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.987615 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" event={"ID":"3fa2e962-e967-40fc-b5e5-4ae20c68a139","Type":"ContainerStarted","Data":"c7f9427f615055e9a18c9397a7d87a5785d5dcd67c8486de7249009393b28b5e"} Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.997518 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-cfnapi-7f7585cb88-jshvv" event={"ID":"17f15fe3-9df7-4bd6-8bca-d357f52e458d","Type":"ContainerStarted","Data":"ee20e1ad7fe019aab6b30fb6ddce84ad330e4fdb063fd7c00b7444e8795a600b"} Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.087701 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-57b6497888-fkqsr"] Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.093851 4761 scope.go:117] "RemoveContainer" containerID="08a531bbea56745d96bf5808403f6dcf83e2a3f8d100ed5e64c06f8c0c91449a" Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.120425 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-57b6497888-fkqsr"] Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.297353 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-8dtv6"] Mar 07 08:13:20 crc kubenswrapper[4761]: E0307 08:13:20.297877 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerName="neutron-api" Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.297888 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerName="neutron-api" Mar 07 08:13:20 crc kubenswrapper[4761]: E0307 08:13:20.297928 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerName="neutron-httpd" Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.297934 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerName="neutron-httpd" Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.298149 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerName="neutron-api" Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.298172 4761 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerName="neutron-httpd" Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.298952 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.328172 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a467587-eec2-4610-af1d-e666203cdddb-operator-scripts\") pod \"nova-api-db-create-8dtv6\" (UID: \"9a467587-eec2-4610-af1d-e666203cdddb\") " pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.328242 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9qbh\" (UniqueName: \"kubernetes.io/projected/9a467587-eec2-4610-af1d-e666203cdddb-kube-api-access-j9qbh\") pod \"nova-api-db-create-8dtv6\" (UID: \"9a467587-eec2-4610-af1d-e666203cdddb\") " pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.328684 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8dtv6"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.421182 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-pw6jj"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.424764 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.437013 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a467587-eec2-4610-af1d-e666203cdddb-operator-scripts\") pod \"nova-api-db-create-8dtv6\" (UID: \"9a467587-eec2-4610-af1d-e666203cdddb\") " pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.444384 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a467587-eec2-4610-af1d-e666203cdddb-operator-scripts\") pod \"nova-api-db-create-8dtv6\" (UID: \"9a467587-eec2-4610-af1d-e666203cdddb\") " pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.457017 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9qbh\" (UniqueName: \"kubernetes.io/projected/9a467587-eec2-4610-af1d-e666203cdddb-kube-api-access-j9qbh\") pod \"nova-api-db-create-8dtv6\" (UID: \"9a467587-eec2-4610-af1d-e666203cdddb\") " pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.501287 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9qbh\" (UniqueName: \"kubernetes.io/projected/9a467587-eec2-4610-af1d-e666203cdddb-kube-api-access-j9qbh\") pod \"nova-api-db-create-8dtv6\" (UID: \"9a467587-eec2-4610-af1d-e666203cdddb\") " pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.511806 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pw6jj"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.582203 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2142964f-61fc-4ae0-af75-f6a72e968294-operator-scripts\") pod \"nova-cell0-db-create-pw6jj\" (UID: \"2142964f-61fc-4ae0-af75-f6a72e968294\") " pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.582396 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxntb\" (UniqueName: \"kubernetes.io/projected/2142964f-61fc-4ae0-af75-f6a72e968294-kube-api-access-zxntb\") pod \"nova-cell0-db-create-pw6jj\" (UID: \"2142964f-61fc-4ae0-af75-f6a72e968294\") " pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.601945 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-69bc-account-create-update-jxq5h"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.603577 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.605812 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.622509 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-69bc-account-create-update-jxq5h"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.643077 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-9vzc2"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.649811 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.668047 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.681174 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9vzc2"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.688452 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxntb\" (UniqueName: \"kubernetes.io/projected/2142964f-61fc-4ae0-af75-f6a72e968294-kube-api-access-zxntb\") pod \"nova-cell0-db-create-pw6jj\" (UID: \"2142964f-61fc-4ae0-af75-f6a72e968294\") " pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.688597 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/803bf161-8aed-4d86-bb34-7664bfa5a21d-operator-scripts\") pod \"nova-api-69bc-account-create-update-jxq5h\" (UID: \"803bf161-8aed-4d86-bb34-7664bfa5a21d\") " pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.689477 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xgs7\" (UniqueName: \"kubernetes.io/projected/803bf161-8aed-4d86-bb34-7664bfa5a21d-kube-api-access-8xgs7\") pod \"nova-api-69bc-account-create-update-jxq5h\" (UID: \"803bf161-8aed-4d86-bb34-7664bfa5a21d\") " pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.689552 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2142964f-61fc-4ae0-af75-f6a72e968294-operator-scripts\") pod \"nova-cell0-db-create-pw6jj\" (UID: \"2142964f-61fc-4ae0-af75-f6a72e968294\") " pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.690343 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2142964f-61fc-4ae0-af75-f6a72e968294-operator-scripts\") pod \"nova-cell0-db-create-pw6jj\" (UID: \"2142964f-61fc-4ae0-af75-f6a72e968294\") " pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.711564 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxntb\" (UniqueName: \"kubernetes.io/projected/2142964f-61fc-4ae0-af75-f6a72e968294-kube-api-access-zxntb\") pod \"nova-cell0-db-create-pw6jj\" (UID: \"2142964f-61fc-4ae0-af75-f6a72e968294\") " pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.757995 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-172f-account-create-update-cmtmp"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.762125 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.768156 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.773289 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.787098 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-172f-account-create-update-cmtmp"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.791749 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xgs7\" (UniqueName: \"kubernetes.io/projected/803bf161-8aed-4d86-bb34-7664bfa5a21d-kube-api-access-8xgs7\") pod \"nova-api-69bc-account-create-update-jxq5h\" (UID: \"803bf161-8aed-4d86-bb34-7664bfa5a21d\") " pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.791816 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2eaf7dcd-b827-450a-8ac6-9953588f7697-operator-scripts\") pod \"nova-cell1-db-create-9vzc2\" (UID: \"2eaf7dcd-b827-450a-8ac6-9953588f7697\") " pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.792097 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4597\" (UniqueName: \"kubernetes.io/projected/2eaf7dcd-b827-450a-8ac6-9953588f7697-kube-api-access-b4597\") pod \"nova-cell1-db-create-9vzc2\" (UID: \"2eaf7dcd-b827-450a-8ac6-9953588f7697\") " pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.792271 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/803bf161-8aed-4d86-bb34-7664bfa5a21d-operator-scripts\") pod \"nova-api-69bc-account-create-update-jxq5h\" (UID: \"803bf161-8aed-4d86-bb34-7664bfa5a21d\") " pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.793632 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/803bf161-8aed-4d86-bb34-7664bfa5a21d-operator-scripts\") pod \"nova-api-69bc-account-create-update-jxq5h\" (UID: \"803bf161-8aed-4d86-bb34-7664bfa5a21d\") " pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.820345 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xgs7\" (UniqueName: \"kubernetes.io/projected/803bf161-8aed-4d86-bb34-7664bfa5a21d-kube-api-access-8xgs7\") pod \"nova-api-69bc-account-create-update-jxq5h\" (UID: \"803bf161-8aed-4d86-bb34-7664bfa5a21d\") " pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.891438 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-79a2-account-create-update-dj29x"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.893041 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.894969 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f77b840-931c-4b69-a2e4-23c7bf19f14e-operator-scripts\") pod \"nova-cell0-172f-account-create-update-cmtmp\" (UID: \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\") " pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.896914 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.899908 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4597\" (UniqueName: \"kubernetes.io/projected/2eaf7dcd-b827-450a-8ac6-9953588f7697-kube-api-access-b4597\") pod \"nova-cell1-db-create-9vzc2\" (UID: \"2eaf7dcd-b827-450a-8ac6-9953588f7697\") " pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.900178 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2eaf7dcd-b827-450a-8ac6-9953588f7697-operator-scripts\") pod \"nova-cell1-db-create-9vzc2\" (UID: \"2eaf7dcd-b827-450a-8ac6-9953588f7697\") " pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.900311 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxwss\" (UniqueName: \"kubernetes.io/projected/9f77b840-931c-4b69-a2e4-23c7bf19f14e-kube-api-access-bxwss\") pod \"nova-cell0-172f-account-create-update-cmtmp\" (UID: \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\") " pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.902619 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2eaf7dcd-b827-450a-8ac6-9953588f7697-operator-scripts\") pod \"nova-cell1-db-create-9vzc2\" (UID: \"2eaf7dcd-b827-450a-8ac6-9953588f7697\") " pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.904806 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-79a2-account-create-update-dj29x"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.918961 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4597\" (UniqueName: \"kubernetes.io/projected/2eaf7dcd-b827-450a-8ac6-9953588f7697-kube-api-access-b4597\") pod \"nova-cell1-db-create-9vzc2\" (UID: \"2eaf7dcd-b827-450a-8ac6-9953588f7697\") " pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.924356 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.962824 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.002435 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxwss\" (UniqueName: \"kubernetes.io/projected/9f77b840-931c-4b69-a2e4-23c7bf19f14e-kube-api-access-bxwss\") pod \"nova-cell0-172f-account-create-update-cmtmp\" (UID: \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\") " pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.002524 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f77b840-931c-4b69-a2e4-23c7bf19f14e-operator-scripts\") pod \"nova-cell0-172f-account-create-update-cmtmp\" (UID: \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\") " pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.002662 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg947\" (UniqueName: \"kubernetes.io/projected/856a8ecd-1cf0-4150-9527-c457571785bd-kube-api-access-qg947\") pod \"nova-cell1-79a2-account-create-update-dj29x\" (UID: \"856a8ecd-1cf0-4150-9527-c457571785bd\") " pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.002739 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/856a8ecd-1cf0-4150-9527-c457571785bd-operator-scripts\") pod \"nova-cell1-79a2-account-create-update-dj29x\" (UID: \"856a8ecd-1cf0-4150-9527-c457571785bd\") " pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.019834 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9f77b840-931c-4b69-a2e4-23c7bf19f14e-operator-scripts\") pod \"nova-cell0-172f-account-create-update-cmtmp\" (UID: \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\") " pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.025821 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxwss\" (UniqueName: \"kubernetes.io/projected/9f77b840-931c-4b69-a2e4-23c7bf19f14e-kube-api-access-bxwss\") pod \"nova-cell0-172f-account-create-update-cmtmp\" (UID: \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\") " pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.066025 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-858bf88ddc-crlf2" event={"ID":"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec","Type":"ContainerStarted","Data":"8ff1a07117dc387414848ac0774d1192b24f88909fe4e7b23e99a96b9198a3b0"} Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.114825 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" event={"ID":"3fa2e962-e967-40fc-b5e5-4ae20c68a139","Type":"ContainerStarted","Data":"b8012e41217590ca3360af9b406c062750b0e98b8b0bc957f29f8f2fff4b4956"} Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.114885 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.116205 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg947\" (UniqueName: \"kubernetes.io/projected/856a8ecd-1cf0-4150-9527-c457571785bd-kube-api-access-qg947\") pod \"nova-cell1-79a2-account-create-update-dj29x\" (UID: \"856a8ecd-1cf0-4150-9527-c457571785bd\") " pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.116264 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/856a8ecd-1cf0-4150-9527-c457571785bd-operator-scripts\") pod \"nova-cell1-79a2-account-create-update-dj29x\" (UID: \"856a8ecd-1cf0-4150-9527-c457571785bd\") " pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.118139 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/856a8ecd-1cf0-4150-9527-c457571785bd-operator-scripts\") pod \"nova-cell1-79a2-account-create-update-dj29x\" (UID: \"856a8ecd-1cf0-4150-9527-c457571785bd\") " pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.141495 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg947\" (UniqueName: \"kubernetes.io/projected/856a8ecd-1cf0-4150-9527-c457571785bd-kube-api-access-qg947\") pod \"nova-cell1-79a2-account-create-update-dj29x\" (UID: \"856a8ecd-1cf0-4150-9527-c457571785bd\") " pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.147496 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" podStartSLOduration=4.147472734 podStartE2EDuration="4.147472734s" podCreationTimestamp="2026-03-07 08:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:13:21.13543655 +0000 UTC m=+1458.044603025" watchObservedRunningTime="2026-03-07 08:13:21.147472734 +0000 UTC m=+1458.056639219" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.295446 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.329527 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.739017 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" path="/var/lib/kubelet/pods/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a/volumes" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.156207 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.231352 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-858bf88ddc-crlf2" event={"ID":"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec","Type":"ContainerStarted","Data":"1a3e914f44f63a290c3c91d6827bb52e7655710058f9eb6ae3b907be7b0c456a"} Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.289892 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.316289 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-858bf88ddc-crlf2" podStartSLOduration=5.316267513 podStartE2EDuration="5.316267513s" podCreationTimestamp="2026-03-07 08:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:13:22.289630891 +0000 UTC m=+1459.198797366" watchObservedRunningTime="2026-03-07 08:13:22.316267513 +0000 UTC m=+1459.225433988" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.759105 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z8dct"] Mar 07 08:13:22 crc 
kubenswrapper[4761]: I0307 08:13:22.971156 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.971494 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="ceilometer-central-agent" containerID="cri-o://f6773242ad8f5ad66928a7bfbd4218035821add88a1d594d2b7025e1d24427f0" gracePeriod=30
Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.972073 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="proxy-httpd" containerID="cri-o://213d483fa167f3e6c93de90e4309ca1493c59717e0a9530f798884a133193c58" gracePeriod=30
Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.972194 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="ceilometer-notification-agent" containerID="cri-o://ba99a8539887bd00737654b83bfe8ca6e1811fbef44c02ee311e49fb9b5a8c3d" gracePeriod=30
Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.972253 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="sg-core" containerID="cri-o://c325c77b42db574db4c21df4095eb6524c92395bfdd12bb74dedec271e75adc9" gracePeriod=30
Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.981141 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.217:3000/\": read tcp 10.217.0.2:47920->10.217.0.217:3000: read: connection reset by peer"
Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.288065 4761 generic.go:334] "Generic (PLEG): container finished" podID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerID="213d483fa167f3e6c93de90e4309ca1493c59717e0a9530f798884a133193c58" exitCode=0
Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.288511 4761 generic.go:334] "Generic (PLEG): container finished" podID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerID="c325c77b42db574db4c21df4095eb6524c92395bfdd12bb74dedec271e75adc9" exitCode=2
Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.288160 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerDied","Data":"213d483fa167f3e6c93de90e4309ca1493c59717e0a9530f798884a133193c58"}
Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.288610 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerDied","Data":"c325c77b42db574db4c21df4095eb6524c92395bfdd12bb74dedec271e75adc9"}
Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.289995 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-858bf88ddc-crlf2"
Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.290038 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-858bf88ddc-crlf2"
Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.428313 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-79a2-account-create-update-dj29x"]
Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.438142 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pw6jj"]
Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.481133 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-172f-account-create-update-cmtmp"]
Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.499531 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8dtv6"]
Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.517152 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9vzc2"]
Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.531054 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-69bc-account-create-update-jxq5h"]
Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.651777 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-px52h" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="registry-server" probeResult="failure" output=<
Mar 07 08:13:23 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 08:13:23 crc kubenswrapper[4761]: >
Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.736117 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Mar 07 08:13:23 crc kubenswrapper[4761]: W0307 08:13:23.966037 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f77b840_931c_4b69_a2e4_23c7bf19f14e.slice/crio-9e1b6ab8bf2aa8773bee8906890aa3523e400dd8d1dbac5842acb8a47475e77b WatchSource:0}: Error finding container 9e1b6ab8bf2aa8773bee8906890aa3523e400dd8d1dbac5842acb8a47475e77b: Status 404 returned error can't find the container with id 9e1b6ab8bf2aa8773bee8906890aa3523e400dd8d1dbac5842acb8a47475e77b
Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.369504 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pw6jj" event={"ID":"2142964f-61fc-4ae0-af75-f6a72e968294","Type":"ContainerStarted","Data":"c270396f0e856f6eb35ab047718c470f3721665a5ef34b12393d327feaf37cec"}
Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.395236 4761 generic.go:334] "Generic (PLEG): container finished" podID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerID="ba99a8539887bd00737654b83bfe8ca6e1811fbef44c02ee311e49fb9b5a8c3d" exitCode=0
Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.395268 4761 generic.go:334] "Generic (PLEG): container finished" podID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerID="f6773242ad8f5ad66928a7bfbd4218035821add88a1d594d2b7025e1d24427f0" exitCode=0
Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.395320 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerDied","Data":"ba99a8539887bd00737654b83bfe8ca6e1811fbef44c02ee311e49fb9b5a8c3d"}
Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.395346 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerDied","Data":"f6773242ad8f5ad66928a7bfbd4218035821add88a1d594d2b7025e1d24427f0"}
Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.401419 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-69bc-account-create-update-jxq5h" event={"ID":"803bf161-8aed-4d86-bb34-7664bfa5a21d","Type":"ContainerStarted","Data":"dface9d72fd55aee49ba3b1b6e3de6e8169cc80b515bba79bbe1342a378acd4b"}
Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.409652 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9vzc2" event={"ID":"2eaf7dcd-b827-450a-8ac6-9953588f7697","Type":"ContainerStarted","Data":"a8ab4a95c0e4ad7153d8253bfc7c03a3ba9db41cae2453a892658e50d6011eb0"}
Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.424247 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-172f-account-create-update-cmtmp" event={"ID":"9f77b840-931c-4b69-a2e4-23c7bf19f14e","Type":"ContainerStarted","Data":"9e1b6ab8bf2aa8773bee8906890aa3523e400dd8d1dbac5842acb8a47475e77b"}
Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.426259 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8dtv6" event={"ID":"9a467587-eec2-4610-af1d-e666203cdddb","Type":"ContainerStarted","Data":"a3bb7a9065043cb0ed93d4d97624d49e5a012912407f940b60aaf05050eb1aa9"}
Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.438164 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z8dct" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="registry-server" containerID="cri-o://37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd" gracePeriod=2
Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.438289 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-79a2-account-create-update-dj29x" event={"ID":"856a8ecd-1cf0-4150-9527-c457571785bd","Type":"ContainerStarted","Data":"841297497b75d80ea71a4d51ad65fecab0e887e46bb3142902673ecb44c7101a"}
Mar 07 08:13:25 crc kubenswrapper[4761]: I0307 08:13:25.458973 4761 generic.go:334] "Generic (PLEG): container finished" podID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerID="37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd" exitCode=0
Mar 07 08:13:25 crc kubenswrapper[4761]: I0307 08:13:25.459278 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8dct" event={"ID":"3ad49ed9-8c84-4de1-830c-679262fc906d","Type":"ContainerDied","Data":"37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd"}
Mar 07 08:13:25 crc kubenswrapper[4761]: I0307 08:13:25.735921 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5jjc" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" probeResult="failure" output=<
Mar 07 08:13:25 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 08:13:25 crc kubenswrapper[4761]: >
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.070926 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-676c57c97f-mmh72"]
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.072559 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-676c57c97f-mmh72"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.093101 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-676c57c97f-mmh72"]
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.152587 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-combined-ca-bundle\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.152633 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data-custom\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.152663 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.152687 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-994vx\" (UniqueName: \"kubernetes.io/projected/1a968322-70c2-43b9-9842-7827fab7aa99-kube-api-access-994vx\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.175677 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6f6989b97c-mlg9v"]
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.177411 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.204064 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f6989b97c-mlg9v"]
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.225781 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5ddf795488-wndb8"]
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.227363 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5ddf795488-wndb8"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255094 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data-custom\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255139 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgwqc\" (UniqueName: \"kubernetes.io/projected/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-kube-api-access-qgwqc\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255191 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255215 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-combined-ca-bundle\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255232 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255261 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data-custom\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255286 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p56cr\" (UniqueName: \"kubernetes.io/projected/f692c15c-b560-4796-97b4-e522c6527322-kube-api-access-p56cr\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255313 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255340 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-994vx\" (UniqueName: \"kubernetes.io/projected/1a968322-70c2-43b9-9842-7827fab7aa99-kube-api-access-994vx\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255387 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-combined-ca-bundle\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255542 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-combined-ca-bundle\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255576 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data-custom\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.261159 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5ddf795488-wndb8"]
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.281796 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-combined-ca-bundle\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.285956 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.297804 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-994vx\" (UniqueName: \"kubernetes.io/projected/1a968322-70c2-43b9-9842-7827fab7aa99-kube-api-access-994vx\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.305743 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data-custom\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.356966 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-combined-ca-bundle\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.357028 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data-custom\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.357075 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data-custom\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.357106 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgwqc\" (UniqueName: \"kubernetes.io/projected/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-kube-api-access-qgwqc\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.357149 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.357176 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.357207 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p56cr\" (UniqueName: \"kubernetes.io/projected/f692c15c-b560-4796-97b4-e522c6527322-kube-api-access-p56cr\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.357271 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-combined-ca-bundle\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.362071 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-combined-ca-bundle\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.363531 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.363822 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data-custom\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.364599 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-combined-ca-bundle\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.365524 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.367084 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data-custom\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.376016 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p56cr\" (UniqueName: \"kubernetes.io/projected/f692c15c-b560-4796-97b4-e522c6527322-kube-api-access-p56cr\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.377068 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgwqc\" (UniqueName: \"kubernetes.io/projected/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-kube-api-access-qgwqc\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.427207 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-676c57c97f-mmh72"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.531030 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v"
Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.546372 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5ddf795488-wndb8"
Mar 07 08:13:27 crc kubenswrapper[4761]: I0307 08:13:27.911846 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-965pw"
Mar 07 08:13:27 crc kubenswrapper[4761]: I0307 08:13:27.975489 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cxtbf"]
Mar 07 08:13:27 crc kubenswrapper[4761]: I0307 08:13:27.975775 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" podUID="47de323f-ec4f-408e-ab84-7795676044fe" containerName="dnsmasq-dns" containerID="cri-o://2f178f2514e878b04f9c28ff9d6c8b7cb650cbc72f8af917f0ccc2484220920b" gracePeriod=10
Mar 07 08:13:28 crc kubenswrapper[4761]: I0307 08:13:28.382665 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-858bf88ddc-crlf2"
Mar 07 08:13:28 crc kubenswrapper[4761]: I0307 08:13:28.436013 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-858bf88ddc-crlf2"
Mar 07 08:13:28 crc kubenswrapper[4761]: I0307 08:13:28.510486 4761 generic.go:334] "Generic (PLEG): container finished" podID="47de323f-ec4f-408e-ab84-7795676044fe" containerID="2f178f2514e878b04f9c28ff9d6c8b7cb650cbc72f8af917f0ccc2484220920b" exitCode=0
Mar 07 08:13:28 crc kubenswrapper[4761]: I0307 08:13:28.511572 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" event={"ID":"47de323f-ec4f-408e-ab84-7795676044fe","Type":"ContainerDied","Data":"2f178f2514e878b04f9c28ff9d6c8b7cb650cbc72f8af917f0ccc2484220920b"}
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.267929 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6f94956c9f-xbq22"]
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.282844 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7f7585cb88-jshvv"]
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.293462 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-b8f8c888f-mxmzb"]
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.295773 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-b8f8c888f-mxmzb"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.300004 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.301186 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.307843 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-b8f8c888f-mxmzb"]
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.365310 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-759cd75854-8ppd6"]
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.367278 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-759cd75854-8ppd6"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.374324 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.374983 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471117 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-public-tls-certs\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471223 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wvxq\" (UniqueName: \"kubernetes.io/projected/35163093-c6c8-4422-b9cc-e12645187165-kube-api-access-2wvxq\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471351 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471452 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471504 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-combined-ca-bundle\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471573 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data-custom\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471595 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-public-tls-certs\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471627 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data-custom\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471692 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-combined-ca-bundle\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471756 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-internal-tls-certs\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471793 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds5bs\" (UniqueName: \"kubernetes.io/projected/4b63b266-eb88-4bce-bb76-76dff72e1e72-kube-api-access-ds5bs\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471815 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-internal-tls-certs\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.479014 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-759cd75854-8ppd6"]
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573631 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-public-tls-certs\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573694 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wvxq\" (UniqueName: \"kubernetes.io/projected/35163093-c6c8-4422-b9cc-e12645187165-kube-api-access-2wvxq\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573761 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573804 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573831 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-combined-ca-bundle\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573865 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data-custom\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573886 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-public-tls-certs\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573915 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data-custom\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573958 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-combined-ca-bundle\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573990 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-internal-tls-certs\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.574018 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds5bs\" (UniqueName: \"kubernetes.io/projected/4b63b266-eb88-4bce-bb76-76dff72e1e72-kube-api-access-ds5bs\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.574043 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-internal-tls-certs\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.585737 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-public-tls-certs\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.586632 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.587648 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-public-tls-certs\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.588602 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-internal-tls-certs\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6"
Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.589703 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-combined-ca-bundle\") pod 
\"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.590706 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data-custom\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.593565 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data-custom\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.594238 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-internal-tls-certs\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.598151 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds5bs\" (UniqueName: \"kubernetes.io/projected/4b63b266-eb88-4bce-bb76-76dff72e1e72-kube-api-access-ds5bs\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.599530 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-combined-ca-bundle\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " 
pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.599693 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wvxq\" (UniqueName: \"kubernetes.io/projected/35163093-c6c8-4422-b9cc-e12645187165-kube-api-access-2wvxq\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.619042 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.676669 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.732506 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:30 crc kubenswrapper[4761]: I0307 08:13:30.590510 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" podUID="47de323f-ec4f-408e-ab84-7795676044fe" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.213:5353: connect: connection refused" Mar 07 08:13:32 crc kubenswrapper[4761]: E0307 08:13:32.052002 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd is running failed: container process not found" containerID="37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd" cmd=["grpc_health_probe","-addr=:50051"] Mar 07 08:13:32 crc kubenswrapper[4761]: E0307 08:13:32.052894 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd is running failed: container process not found" containerID="37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd" cmd=["grpc_health_probe","-addr=:50051"] Mar 07 08:13:32 crc kubenswrapper[4761]: E0307 08:13:32.053092 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd is running failed: container process not found" containerID="37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd" cmd=["grpc_health_probe","-addr=:50051"] Mar 07 08:13:32 crc kubenswrapper[4761]: E0307 08:13:32.053118 4761 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-z8dct" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="registry-server" Mar 07 08:13:32 crc kubenswrapper[4761]: I0307 08:13:32.644501 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-px52h" Mar 07 08:13:32 crc kubenswrapper[4761]: I0307 08:13:32.697984 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-px52h" Mar 07 08:13:32 crc kubenswrapper[4761]: I0307 08:13:32.895858 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-px52h"] Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.362375 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.372604 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-utilities\") pod \"3ad49ed9-8c84-4de1-830c-679262fc906d\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.372746 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-catalog-content\") pod \"3ad49ed9-8c84-4de1-830c-679262fc906d\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.372795 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc6st\" (UniqueName: \"kubernetes.io/projected/3ad49ed9-8c84-4de1-830c-679262fc906d-kube-api-access-kc6st\") pod 
\"3ad49ed9-8c84-4de1-830c-679262fc906d\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.373408 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-utilities" (OuterVolumeSpecName: "utilities") pod "3ad49ed9-8c84-4de1-830c-679262fc906d" (UID: "3ad49ed9-8c84-4de1-830c-679262fc906d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.373982 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.377386 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ad49ed9-8c84-4de1-830c-679262fc906d-kube-api-access-kc6st" (OuterVolumeSpecName: "kube-api-access-kc6st") pod "3ad49ed9-8c84-4de1-830c-679262fc906d" (UID: "3ad49ed9-8c84-4de1-830c-679262fc906d"). InnerVolumeSpecName "kube-api-access-kc6st". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.406432 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.471566 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ad49ed9-8c84-4de1-830c-679262fc906d" (UID: "3ad49ed9-8c84-4de1-830c-679262fc906d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.476999 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.477060 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc6st\" (UniqueName: \"kubernetes.io/projected/3ad49ed9-8c84-4de1-830c-679262fc906d-kube-api-access-kc6st\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.506258 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.578707 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-config-data\") pod \"d7481eb8-b067-41f0-9347-7665f72b5d6a\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.578931 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-log-httpd\") pod \"d7481eb8-b067-41f0-9347-7665f72b5d6a\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.578983 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-run-httpd\") pod \"d7481eb8-b067-41f0-9347-7665f72b5d6a\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.579086 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-combined-ca-bundle\") pod \"d7481eb8-b067-41f0-9347-7665f72b5d6a\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.579271 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-sg-core-conf-yaml\") pod \"d7481eb8-b067-41f0-9347-7665f72b5d6a\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.579312 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-scripts\") pod \"d7481eb8-b067-41f0-9347-7665f72b5d6a\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.579421 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd7rk\" (UniqueName: \"kubernetes.io/projected/d7481eb8-b067-41f0-9347-7665f72b5d6a-kube-api-access-jd7rk\") pod \"d7481eb8-b067-41f0-9347-7665f72b5d6a\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.582498 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d7481eb8-b067-41f0-9347-7665f72b5d6a" (UID: "d7481eb8-b067-41f0-9347-7665f72b5d6a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.584600 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d7481eb8-b067-41f0-9347-7665f72b5d6a" (UID: "d7481eb8-b067-41f0-9347-7665f72b5d6a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.616807 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-scripts" (OuterVolumeSpecName: "scripts") pod "d7481eb8-b067-41f0-9347-7665f72b5d6a" (UID: "d7481eb8-b067-41f0-9347-7665f72b5d6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.631499 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7481eb8-b067-41f0-9347-7665f72b5d6a-kube-api-access-jd7rk" (OuterVolumeSpecName: "kube-api-access-jd7rk") pod "d7481eb8-b067-41f0-9347-7665f72b5d6a" (UID: "d7481eb8-b067-41f0-9347-7665f72b5d6a"). InnerVolumeSpecName "kube-api-access-jd7rk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.651272 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.651591 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerDied","Data":"1e7c1a880355ecefe14e2eb240097ae879ac23e20a08685bf21fe65599254a91"} Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.651657 4761 scope.go:117] "RemoveContainer" containerID="213d483fa167f3e6c93de90e4309ca1493c59717e0a9530f798884a133193c58" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.657483 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8dct" event={"ID":"3ad49ed9-8c84-4de1-830c-679262fc906d","Type":"ContainerDied","Data":"090a72409729fe7daeed2197536fbaddaa5293f4efc5c41aa0af78a61f93da8c"} Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.657559 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.663206 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" event={"ID":"47de323f-ec4f-408e-ab84-7795676044fe","Type":"ContainerDied","Data":"69a69b2cb8492a4adf3759da20e907916ffd475dadc126a6233b6ca253538ef7"} Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.663328 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.681551 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-config\") pod \"47de323f-ec4f-408e-ab84-7795676044fe\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.681818 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-svc\") pod \"47de323f-ec4f-408e-ab84-7795676044fe\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.681881 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pl2s\" (UniqueName: \"kubernetes.io/projected/47de323f-ec4f-408e-ab84-7795676044fe-kube-api-access-4pl2s\") pod \"47de323f-ec4f-408e-ab84-7795676044fe\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.681922 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-swift-storage-0\") pod \"47de323f-ec4f-408e-ab84-7795676044fe\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.681971 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-sb\") pod \"47de323f-ec4f-408e-ab84-7795676044fe\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.682022 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-nb\") pod \"47de323f-ec4f-408e-ab84-7795676044fe\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.682537 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd7rk\" (UniqueName: \"kubernetes.io/projected/d7481eb8-b067-41f0-9347-7665f72b5d6a-kube-api-access-jd7rk\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.682557 4761 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.682570 4761 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.682581 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.708016 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47de323f-ec4f-408e-ab84-7795676044fe-kube-api-access-4pl2s" (OuterVolumeSpecName: "kube-api-access-4pl2s") pod "47de323f-ec4f-408e-ab84-7795676044fe" (UID: "47de323f-ec4f-408e-ab84-7795676044fe"). InnerVolumeSpecName "kube-api-access-4pl2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.785980 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pl2s\" (UniqueName: \"kubernetes.io/projected/47de323f-ec4f-408e-ab84-7795676044fe-kube-api-access-4pl2s\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.800408 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z8dct"] Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.802326 4761 scope.go:117] "RemoveContainer" containerID="c325c77b42db574db4c21df4095eb6524c92395bfdd12bb74dedec271e75adc9" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.814930 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z8dct"] Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.884822 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-b8f8c888f-mxmzb"] Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.939311 4761 scope.go:117] "RemoveContainer" containerID="ba99a8539887bd00737654b83bfe8ca6e1811fbef44c02ee311e49fb9b5a8c3d" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.335943 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d7481eb8-b067-41f0-9347-7665f72b5d6a" (UID: "d7481eb8-b067-41f0-9347-7665f72b5d6a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.362426 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-676c57c97f-mmh72"] Mar 07 08:13:34 crc kubenswrapper[4761]: W0307 08:13:34.379746 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a968322_70c2_43b9_9842_7827fab7aa99.slice/crio-00ef68fbd07a8b813907cee43a3091207e90aeb988a48b6c328838e9d4ad0ea5 WatchSource:0}: Error finding container 00ef68fbd07a8b813907cee43a3091207e90aeb988a48b6c328838e9d4ad0ea5: Status 404 returned error can't find the container with id 00ef68fbd07a8b813907cee43a3091207e90aeb988a48b6c328838e9d4ad0ea5 Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.384959 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f6989b97c-mlg9v"] Mar 07 08:13:34 crc kubenswrapper[4761]: W0307 08:13:34.394968 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf692c15c_b560_4796_97b4_e522c6527322.slice/crio-a1b37c09efba40756556d359429c1fe4ea0713e12c8be1a8f692859a757c862e WatchSource:0}: Error finding container a1b37c09efba40756556d359429c1fe4ea0713e12c8be1a8f692859a757c862e: Status 404 returned error can't find the container with id a1b37c09efba40756556d359429c1fe4ea0713e12c8be1a8f692859a757c862e Mar 07 08:13:34 crc kubenswrapper[4761]: W0307 08:13:34.399491 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb40d04ab_9269_46e2_b17a_b6f2f8fddb78.slice/crio-3c6a6b40b1679e1b159db93e088ad66d3e754f4102a5f5af9feb1da94e1af613 WatchSource:0}: Error finding container 3c6a6b40b1679e1b159db93e088ad66d3e754f4102a5f5af9feb1da94e1af613: Status 404 returned error can't find the container with id 
3c6a6b40b1679e1b159db93e088ad66d3e754f4102a5f5af9feb1da94e1af613 Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.403367 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5ddf795488-wndb8"] Mar 07 08:13:34 crc kubenswrapper[4761]: W0307 08:13:34.414025 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b63b266_eb88_4bce_bb76_76dff72e1e72.slice/crio-39064596057c52df8c571d8d99e9d09153c64bc5512fbc024127e78e3122a00c WatchSource:0}: Error finding container 39064596057c52df8c571d8d99e9d09153c64bc5512fbc024127e78e3122a00c: Status 404 returned error can't find the container with id 39064596057c52df8c571d8d99e9d09153c64bc5512fbc024127e78e3122a00c Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.415399 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-759cd75854-8ppd6"] Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.415829 4761 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.536310 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "47de323f-ec4f-408e-ab84-7795676044fe" (UID: "47de323f-ec4f-408e-ab84-7795676044fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.552231 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "47de323f-ec4f-408e-ab84-7795676044fe" (UID: "47de323f-ec4f-408e-ab84-7795676044fe"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.567222 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "47de323f-ec4f-408e-ab84-7795676044fe" (UID: "47de323f-ec4f-408e-ab84-7795676044fe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.597117 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-config" (OuterVolumeSpecName: "config") pod "47de323f-ec4f-408e-ab84-7795676044fe" (UID: "47de323f-ec4f-408e-ab84-7795676044fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.597268 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "47de323f-ec4f-408e-ab84-7795676044fe" (UID: "47de323f-ec4f-408e-ab84-7795676044fe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.605932 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7481eb8-b067-41f0-9347-7665f72b5d6a" (UID: "d7481eb8-b067-41f0-9347-7665f72b5d6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.624891 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-config-data" (OuterVolumeSpecName: "config-data") pod "d7481eb8-b067-41f0-9347-7665f72b5d6a" (UID: "d7481eb8-b067-41f0-9347-7665f72b5d6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.625128 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.625155 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.625170 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.625182 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.625194 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.625206 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-config\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.625219 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-config-data\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.677289 4761 generic.go:334] "Generic (PLEG): container finished" podID="803bf161-8aed-4d86-bb34-7664bfa5a21d" containerID="7e5c076375addd1c3b05b3e3c6c2449ad7b80520631cb308c2a677abe8bce2d0" exitCode=0
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.677561 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-69bc-account-create-update-jxq5h" event={"ID":"803bf161-8aed-4d86-bb34-7664bfa5a21d","Type":"ContainerDied","Data":"7e5c076375addd1c3b05b3e3c6c2449ad7b80520631cb308c2a677abe8bce2d0"}
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.679649 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-759cd75854-8ppd6" event={"ID":"4b63b266-eb88-4bce-bb76-76dff72e1e72","Type":"ContainerStarted","Data":"39064596057c52df8c571d8d99e9d09153c64bc5512fbc024127e78e3122a00c"}
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.685795 4761 generic.go:334] "Generic (PLEG): container finished" podID="2eaf7dcd-b827-450a-8ac6-9953588f7697" containerID="8bd1714162f5fffdc0f00791d72262d374eef35faf0b19a884566f7b4045c8a0" exitCode=0
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.685889 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9vzc2" event={"ID":"2eaf7dcd-b827-450a-8ac6-9953588f7697","Type":"ContainerDied","Data":"8bd1714162f5fffdc0f00791d72262d374eef35faf0b19a884566f7b4045c8a0"}
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.689366 4761 generic.go:334] "Generic (PLEG): container finished" podID="9f77b840-931c-4b69-a2e4-23c7bf19f14e" containerID="4278c6d7e37afe8132d9584f5a1a8ff6192cc21ad46705e83ef3316d86918aff" exitCode=0
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.689430 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-172f-account-create-update-cmtmp" event={"ID":"9f77b840-931c-4b69-a2e4-23c7bf19f14e","Type":"ContainerDied","Data":"4278c6d7e37afe8132d9584f5a1a8ff6192cc21ad46705e83ef3316d86918aff"}
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.694203 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5ddf795488-wndb8" event={"ID":"b40d04ab-9269-46e2-b17a-b6f2f8fddb78","Type":"ContainerStarted","Data":"3c6a6b40b1679e1b159db93e088ad66d3e754f4102a5f5af9feb1da94e1af613"}
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.695515 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b8f8c888f-mxmzb" event={"ID":"35163093-c6c8-4422-b9cc-e12645187165","Type":"ContainerStarted","Data":"5d5aca546b08059075eb76b1f3ba8fe7d4bacc17011c3287975fcb34af813e4a"}
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.697045 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f94956c9f-xbq22" event={"ID":"19b5d822-117e-4890-9ef2-6e75fc9a5c98","Type":"ContainerStarted","Data":"d4664c58f260536a81211c969a35f89ac9977c97d2b99db0a4bb205c039801d8"}
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.697180 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6f94956c9f-xbq22" podUID="19b5d822-117e-4890-9ef2-6e75fc9a5c98" containerName="heat-api" containerID="cri-o://d4664c58f260536a81211c969a35f89ac9977c97d2b99db0a4bb205c039801d8" gracePeriod=60
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.697271 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6f94956c9f-xbq22"
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.711249 4761 generic.go:334] "Generic (PLEG): container finished" podID="2142964f-61fc-4ae0-af75-f6a72e968294" containerID="d833b981b4691270dca8f538b2b902fc383572783c4ddf6451d1d99578a88b14" exitCode=0
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.711343 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pw6jj" event={"ID":"2142964f-61fc-4ae0-af75-f6a72e968294","Type":"ContainerDied","Data":"d833b981b4691270dca8f538b2b902fc383572783c4ddf6451d1d99578a88b14"}
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.714080 4761 generic.go:334] "Generic (PLEG): container finished" podID="9a467587-eec2-4610-af1d-e666203cdddb" containerID="149b48cf85012d70b4ae66bce7176663f91468b88970a035d6273065ef6b64fd" exitCode=0
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.714168 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8dtv6" event={"ID":"9a467587-eec2-4610-af1d-e666203cdddb","Type":"ContainerDied","Data":"149b48cf85012d70b4ae66bce7176663f91468b88970a035d6273065ef6b64fd"}
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.717917 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-79a2-account-create-update-dj29x" event={"ID":"856a8ecd-1cf0-4150-9527-c457571785bd","Type":"ContainerStarted","Data":"f5c225d3c383fc2428ebdbaef59f7c19afff3acb77d8d8c8541b440f91e5c607"}
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.744167 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6f94956c9f-xbq22" podStartSLOduration=3.923568392 podStartE2EDuration="17.744140771s" podCreationTimestamp="2026-03-07 08:13:17 +0000 UTC" firstStartedPulling="2026-03-07 08:13:19.280005179 +0000 UTC m=+1456.189171664" lastFinishedPulling="2026-03-07 08:13:33.100577578 +0000 UTC m=+1470.009744043" observedRunningTime="2026-03-07 08:13:34.738284234 +0000 UTC m=+1471.647450709" watchObservedRunningTime="2026-03-07 08:13:34.744140771 +0000 UTC m=+1471.653307246"
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.746839 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" event={"ID":"f692c15c-b560-4796-97b4-e522c6527322","Type":"ContainerStarted","Data":"a1b37c09efba40756556d359429c1fe4ea0713e12c8be1a8f692859a757c862e"}
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.750611 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-px52h" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="registry-server" containerID="cri-o://2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411" gracePeriod=2
Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.750708 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-676c57c97f-mmh72" event={"ID":"1a968322-70c2-43b9-9842-7827fab7aa99","Type":"ContainerStarted","Data":"00ef68fbd07a8b813907cee43a3091207e90aeb988a48b6c328838e9d4ad0ea5"}
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.250638 4761 scope.go:117] "RemoveContainer" containerID="f6773242ad8f5ad66928a7bfbd4218035821add88a1d594d2b7025e1d24427f0"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.364899 4761 scope.go:117] "RemoveContainer" containerID="37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.542625 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-px52h"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.551689 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-catalog-content\") pod \"321917f1-f061-4e00-a598-2766772d2290\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") "
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.552047 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-utilities\") pod \"321917f1-f061-4e00-a598-2766772d2290\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") "
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.552161 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n42w2\" (UniqueName: \"kubernetes.io/projected/321917f1-f061-4e00-a598-2766772d2290-kube-api-access-n42w2\") pod \"321917f1-f061-4e00-a598-2766772d2290\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") "
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.552387 4761 scope.go:117] "RemoveContainer" containerID="838d2403b600902e213c3a5f93612e34608bca197d59c3727a1ff2eeb0d7feb7"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.552474 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-utilities" (OuterVolumeSpecName: "utilities") pod "321917f1-f061-4e00-a598-2766772d2290" (UID: "321917f1-f061-4e00-a598-2766772d2290"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.552875 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.557078 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/321917f1-f061-4e00-a598-2766772d2290-kube-api-access-n42w2" (OuterVolumeSpecName: "kube-api-access-n42w2") pod "321917f1-f061-4e00-a598-2766772d2290" (UID: "321917f1-f061-4e00-a598-2766772d2290"). InnerVolumeSpecName "kube-api-access-n42w2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.608514 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "321917f1-f061-4e00-a598-2766772d2290" (UID: "321917f1-f061-4e00-a598-2766772d2290"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.655394 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.655426 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n42w2\" (UniqueName: \"kubernetes.io/projected/321917f1-f061-4e00-a598-2766772d2290-kube-api-access-n42w2\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.701569 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5jjc" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" probeResult="failure" output=<
Mar 07 08:13:35 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 08:13:35 crc kubenswrapper[4761]: >
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.721866 4761 scope.go:117] "RemoveContainer" containerID="82295a8be5f6343bca3c9c0785b56f687bd5b59561b60a8b69c2f6c1d2003d94"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.763061 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" path="/var/lib/kubelet/pods/3ad49ed9-8c84-4de1-830c-679262fc906d/volumes"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.764527 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.764561 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.767574 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768199 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="extract-utilities"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768224 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="extract-utilities"
Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768240 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="registry-server"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768249 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="registry-server"
Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768271 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="registry-server"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768280 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="registry-server"
Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768298 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="sg-core"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768305 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="sg-core"
Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768325 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="extract-content"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768333 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="extract-content"
Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768343 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="proxy-httpd"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768351 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="proxy-httpd"
Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768369 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47de323f-ec4f-408e-ab84-7795676044fe" containerName="init"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768377 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="47de323f-ec4f-408e-ab84-7795676044fe" containerName="init"
Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768403 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="ceilometer-central-agent"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768411 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="ceilometer-central-agent"
Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768423 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47de323f-ec4f-408e-ab84-7795676044fe" containerName="dnsmasq-dns"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768431 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="47de323f-ec4f-408e-ab84-7795676044fe" containerName="dnsmasq-dns"
Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768449 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="ceilometer-notification-agent"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768457 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="ceilometer-notification-agent"
Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768468 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="extract-content"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768476 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="extract-content"
Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768492 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="extract-utilities"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768499 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="extract-utilities"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768794 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="ceilometer-central-agent"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768811 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="sg-core"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768836 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="registry-server"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768850 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="proxy-httpd"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768861 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="47de323f-ec4f-408e-ab84-7795676044fe" containerName="dnsmasq-dns"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768876 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="ceilometer-notification-agent"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768888 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="registry-server"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.771667 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.774519 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.774734 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.779336 4761 generic.go:334] "Generic (PLEG): container finished" podID="856a8ecd-1cf0-4150-9527-c457571785bd" containerID="f5c225d3c383fc2428ebdbaef59f7c19afff3acb77d8d8c8541b440f91e5c607" exitCode=0
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.779409 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-79a2-account-create-update-dj29x" event={"ID":"856a8ecd-1cf0-4150-9527-c457571785bd","Type":"ContainerDied","Data":"f5c225d3c383fc2428ebdbaef59f7c19afff3acb77d8d8c8541b440f91e5c607"}
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.820264 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cxtbf"]
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.835406 4761 generic.go:334] "Generic (PLEG): container finished" podID="321917f1-f061-4e00-a598-2766772d2290" containerID="2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411" exitCode=0
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.835625 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-px52h"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.845117 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px52h" event={"ID":"321917f1-f061-4e00-a598-2766772d2290","Type":"ContainerDied","Data":"2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411"}
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.845171 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px52h" event={"ID":"321917f1-f061-4e00-a598-2766772d2290","Type":"ContainerDied","Data":"2cbfd1b3af208babb0d08bf03360a9cd1efcb6c980322092c1b709cbeae0d45d"}
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.851508 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cxtbf"]
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.865747 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.878816 4761 scope.go:117] "RemoveContainer" containerID="2f178f2514e878b04f9c28ff9d6c8b7cb650cbc72f8af917f0ccc2484220920b"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.936847 4761 scope.go:117] "RemoveContainer" containerID="db76a4b10bb0a626ef23cd3081e3ec0c08ddcae40fa11f11d1e45f6d1d2e63e8"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.969923 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-scripts\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.970060 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-run-httpd\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.970159 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-log-httpd\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.970299 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.970381 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.970478 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4tbf\" (UniqueName: \"kubernetes.io/projected/94a423ba-64ee-463e-bc87-233d93782eb3-kube-api-access-f4tbf\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.970576 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-config-data\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.988014 4761 scope.go:117] "RemoveContainer" containerID="2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.021237 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-px52h"]
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.075052 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-scripts\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.075161 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-run-httpd\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.075226 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-log-httpd\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.075312 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.075370 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.075432 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4tbf\" (UniqueName: \"kubernetes.io/projected/94a423ba-64ee-463e-bc87-233d93782eb3-kube-api-access-f4tbf\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.075502 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-config-data\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.077083 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-log-httpd\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.077315 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-run-httpd\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.097879 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-px52h"]
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.202560 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-scripts\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.205808 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-config-data\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.215134 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.217730 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4tbf\" (UniqueName: \"kubernetes.io/projected/94a423ba-64ee-463e-bc87-233d93782eb3-kube-api-access-f4tbf\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.219163 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.224851 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.593077 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-79a2-account-create-update-dj29x"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.623358 4761 scope.go:117] "RemoveContainer" containerID="0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.666076 4761 scope.go:117] "RemoveContainer" containerID="5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.708656 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/856a8ecd-1cf0-4150-9527-c457571785bd-operator-scripts\") pod \"856a8ecd-1cf0-4150-9527-c457571785bd\" (UID: \"856a8ecd-1cf0-4150-9527-c457571785bd\") "
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.708847 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg947\" (UniqueName: \"kubernetes.io/projected/856a8ecd-1cf0-4150-9527-c457571785bd-kube-api-access-qg947\") pod \"856a8ecd-1cf0-4150-9527-c457571785bd\" (UID: \"856a8ecd-1cf0-4150-9527-c457571785bd\") "
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.711310 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/856a8ecd-1cf0-4150-9527-c457571785bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "856a8ecd-1cf0-4150-9527-c457571785bd" (UID: "856a8ecd-1cf0-4150-9527-c457571785bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.722151 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/856a8ecd-1cf0-4150-9527-c457571785bd-kube-api-access-qg947" (OuterVolumeSpecName: "kube-api-access-qg947") pod "856a8ecd-1cf0-4150-9527-c457571785bd" (UID: "856a8ecd-1cf0-4150-9527-c457571785bd"). InnerVolumeSpecName "kube-api-access-qg947". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.774882 4761 scope.go:117] "RemoveContainer" containerID="2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411"
Mar 07 08:13:36 crc kubenswrapper[4761]: E0307 08:13:36.784327 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411\": container with ID starting with 2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411 not found: ID does not exist" containerID="2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.784371 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411"} err="failed to get container status \"2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411\": rpc error: code = NotFound desc = could not find container \"2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411\": container with ID starting with 2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411 not found: ID does not exist"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.784400 4761 scope.go:117] "RemoveContainer" containerID="0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87"
Mar 07 08:13:36 crc kubenswrapper[4761]: E0307 08:13:36.798707 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87\": container with ID starting with 0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87 not found: ID does not exist" containerID="0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.798766 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87"} err="failed to get container status \"0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87\": rpc error: code = NotFound desc = could not find container \"0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87\": container with ID starting with 0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87 not found: ID does not exist"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.798793 4761 scope.go:117] "RemoveContainer" containerID="5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069"
Mar 07 08:13:36 crc kubenswrapper[4761]: E0307 08:13:36.802809 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069\": container with ID starting with 5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069 not found: ID does not exist" containerID="5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.802843 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069"} err="failed to get container status \"5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069\": rpc error: code = NotFound desc = could not find container \"5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069\": container with ID starting with 5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069 not found: ID does not exist"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.812676 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg947\" (UniqueName: \"kubernetes.io/projected/856a8ecd-1cf0-4150-9527-c457571785bd-kube-api-access-qg947\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.812707 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/856a8ecd-1cf0-4150-9527-c457571785bd-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.860747 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pw6jj"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.876259 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-759cd75854-8ppd6" event={"ID":"4b63b266-eb88-4bce-bb76-76dff72e1e72","Type":"ContainerStarted","Data":"52f0bb2496856fca4a0d012c5f9685733b249db4e1c09e4b737bfc2bc6bf9459"}
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.877562 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-759cd75854-8ppd6"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.917572 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5ddf795488-wndb8" event={"ID":"b40d04ab-9269-46e2-b17a-b6f2f8fddb78","Type":"ContainerStarted","Data":"17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880"}
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.918165 4761 scope.go:117] "RemoveContainer" containerID="17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880"
Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.919894 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-759cd75854-8ppd6" podStartSLOduration=7.919876406 podStartE2EDuration="7.919876406s" podCreationTimestamp="2026-03-07 08:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC"
observedRunningTime="2026-03-07 08:13:36.917591129 +0000 UTC m=+1473.826757604" watchObservedRunningTime="2026-03-07 08:13:36.919876406 +0000 UTC m=+1473.829042881" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.942541 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-79a2-account-create-update-dj29x" event={"ID":"856a8ecd-1cf0-4150-9527-c457571785bd","Type":"ContainerDied","Data":"841297497b75d80ea71a4d51ad65fecab0e887e46bb3142902673ecb44c7101a"} Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.942589 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="841297497b75d80ea71a4d51ad65fecab0e887e46bb3142902673ecb44c7101a" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.942677 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.980085 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b8f8c888f-mxmzb" event={"ID":"35163093-c6c8-4422-b9cc-e12645187165","Type":"ContainerStarted","Data":"130936491ac0d66e8bc5863e526f0ce24165cc3492d527d7ec2236bfdce93f7a"} Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.980167 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.992755 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.015090 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pw6jj" event={"ID":"2142964f-61fc-4ae0-af75-f6a72e968294","Type":"ContainerDied","Data":"c270396f0e856f6eb35ab047718c470f3721665a5ef34b12393d327feaf37cec"} Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.015128 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c270396f0e856f6eb35ab047718c470f3721665a5ef34b12393d327feaf37cec" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.015197 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.019693 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2142964f-61fc-4ae0-af75-f6a72e968294-operator-scripts\") pod \"2142964f-61fc-4ae0-af75-f6a72e968294\" (UID: \"2142964f-61fc-4ae0-af75-f6a72e968294\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.019901 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxntb\" (UniqueName: \"kubernetes.io/projected/2142964f-61fc-4ae0-af75-f6a72e968294-kube-api-access-zxntb\") pod \"2142964f-61fc-4ae0-af75-f6a72e968294\" (UID: \"2142964f-61fc-4ae0-af75-f6a72e968294\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.044074 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-b8f8c888f-mxmzb" podStartSLOduration=8.0440494 podStartE2EDuration="8.0440494s" podCreationTimestamp="2026-03-07 08:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:13:37.016007483 +0000 UTC 
m=+1473.925173958" watchObservedRunningTime="2026-03-07 08:13:37.0440494 +0000 UTC m=+1473.953215875" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.065732 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2142964f-61fc-4ae0-af75-f6a72e968294-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2142964f-61fc-4ae0-af75-f6a72e968294" (UID: "2142964f-61fc-4ae0-af75-f6a72e968294"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.095987 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2142964f-61fc-4ae0-af75-f6a72e968294-kube-api-access-zxntb" (OuterVolumeSpecName: "kube-api-access-zxntb") pod "2142964f-61fc-4ae0-af75-f6a72e968294" (UID: "2142964f-61fc-4ae0-af75-f6a72e968294"). InnerVolumeSpecName "kube-api-access-zxntb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.096069 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" event={"ID":"17f15fe3-9df7-4bd6-8bca-d357f52e458d","Type":"ContainerStarted","Data":"e78cedba0361382470044aacfafd1307414c9299252e2a4466dd650032f6e402"} Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.096151 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.096149 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" podUID="17f15fe3-9df7-4bd6-8bca-d357f52e458d" containerName="heat-cfnapi" containerID="cri-o://e78cedba0361382470044aacfafd1307414c9299252e2a4466dd650032f6e402" gracePeriod=60 Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.103312 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" event={"ID":"f692c15c-b560-4796-97b4-e522c6527322","Type":"ContainerStarted","Data":"c8274a881c4103816987c9c53e06931386f5dc08a985eb1ba945781335340a9f"} Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.104009 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.121701 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-676c57c97f-mmh72" event={"ID":"1a968322-70c2-43b9-9842-7827fab7aa99","Type":"ContainerStarted","Data":"e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b"} Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.124405 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.150639 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xgs7\" (UniqueName: \"kubernetes.io/projected/803bf161-8aed-4d86-bb34-7664bfa5a21d-kube-api-access-8xgs7\") pod \"803bf161-8aed-4d86-bb34-7664bfa5a21d\" (UID: \"803bf161-8aed-4d86-bb34-7664bfa5a21d\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.150698 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/803bf161-8aed-4d86-bb34-7664bfa5a21d-operator-scripts\") pod \"803bf161-8aed-4d86-bb34-7664bfa5a21d\" (UID: \"803bf161-8aed-4d86-bb34-7664bfa5a21d\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.160359 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2142964f-61fc-4ae0-af75-f6a72e968294-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.160393 4761 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-zxntb\" (UniqueName: \"kubernetes.io/projected/2142964f-61fc-4ae0-af75-f6a72e968294-kube-api-access-zxntb\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.163184 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/803bf161-8aed-4d86-bb34-7664bfa5a21d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "803bf161-8aed-4d86-bb34-7664bfa5a21d" (UID: "803bf161-8aed-4d86-bb34-7664bfa5a21d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.180007 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/803bf161-8aed-4d86-bb34-7664bfa5a21d-kube-api-access-8xgs7" (OuterVolumeSpecName: "kube-api-access-8xgs7") pod "803bf161-8aed-4d86-bb34-7664bfa5a21d" (UID: "803bf161-8aed-4d86-bb34-7664bfa5a21d"). InnerVolumeSpecName "kube-api-access-8xgs7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.180039 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"212a33ff-09a0-4654-adff-687f8d9145a6","Type":"ContainerStarted","Data":"a4145ad51befc4cd27a859fba0e7e28e3d90ad3ad55a613de911f491a9e84b09"} Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.236478 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" podStartSLOduration=6.299682765 podStartE2EDuration="20.236455327s" podCreationTimestamp="2026-03-07 08:13:17 +0000 UTC" firstStartedPulling="2026-03-07 08:13:19.218074935 +0000 UTC m=+1456.127241410" lastFinishedPulling="2026-03-07 08:13:33.154847497 +0000 UTC m=+1470.064013972" observedRunningTime="2026-03-07 08:13:37.132467102 +0000 UTC m=+1474.041633577" watchObservedRunningTime="2026-03-07 08:13:37.236455327 +0000 UTC m=+1474.145621792" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.258706 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.263036 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xgs7\" (UniqueName: \"kubernetes.io/projected/803bf161-8aed-4d86-bb34-7664bfa5a21d-kube-api-access-8xgs7\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.263058 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/803bf161-8aed-4d86-bb34-7664bfa5a21d-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.306796 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.313416 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.336459 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" podStartSLOduration=11.33643626 podStartE2EDuration="11.33643626s" podCreationTimestamp="2026-03-07 08:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:13:37.151114983 +0000 UTC m=+1474.060281458" watchObservedRunningTime="2026-03-07 08:13:37.33643626 +0000 UTC m=+1474.245602735" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.361359 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-676c57c97f-mmh72" podStartSLOduration=11.361334069 podStartE2EDuration="11.361334069s" podCreationTimestamp="2026-03-07 08:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:13:37.171461616 +0000 UTC m=+1474.080628091" watchObservedRunningTime="2026-03-07 08:13:37.361334069 +0000 UTC m=+1474.270500544" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.364795 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4597\" (UniqueName: \"kubernetes.io/projected/2eaf7dcd-b827-450a-8ac6-9953588f7697-kube-api-access-b4597\") pod \"2eaf7dcd-b827-450a-8ac6-9953588f7697\" (UID: \"2eaf7dcd-b827-450a-8ac6-9953588f7697\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.364898 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9a467587-eec2-4610-af1d-e666203cdddb-operator-scripts\") pod \"9a467587-eec2-4610-af1d-e666203cdddb\" (UID: \"9a467587-eec2-4610-af1d-e666203cdddb\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.364928 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9qbh\" (UniqueName: \"kubernetes.io/projected/9a467587-eec2-4610-af1d-e666203cdddb-kube-api-access-j9qbh\") pod \"9a467587-eec2-4610-af1d-e666203cdddb\" (UID: \"9a467587-eec2-4610-af1d-e666203cdddb\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.365021 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2eaf7dcd-b827-450a-8ac6-9953588f7697-operator-scripts\") pod \"2eaf7dcd-b827-450a-8ac6-9953588f7697\" (UID: \"2eaf7dcd-b827-450a-8ac6-9953588f7697\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.365082 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f77b840-931c-4b69-a2e4-23c7bf19f14e-operator-scripts\") pod \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\" (UID: \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.365100 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxwss\" (UniqueName: \"kubernetes.io/projected/9f77b840-931c-4b69-a2e4-23c7bf19f14e-kube-api-access-bxwss\") pod \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\" (UID: \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.366616 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a467587-eec2-4610-af1d-e666203cdddb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a467587-eec2-4610-af1d-e666203cdddb" (UID: "9a467587-eec2-4610-af1d-e666203cdddb"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.367040 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eaf7dcd-b827-450a-8ac6-9953588f7697-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2eaf7dcd-b827-450a-8ac6-9953588f7697" (UID: "2eaf7dcd-b827-450a-8ac6-9953588f7697"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.367339 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f77b840-931c-4b69-a2e4-23c7bf19f14e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9f77b840-931c-4b69-a2e4-23c7bf19f14e" (UID: "9f77b840-931c-4b69-a2e4-23c7bf19f14e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.369647 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a467587-eec2-4610-af1d-e666203cdddb-kube-api-access-j9qbh" (OuterVolumeSpecName: "kube-api-access-j9qbh") pod "9a467587-eec2-4610-af1d-e666203cdddb" (UID: "9a467587-eec2-4610-af1d-e666203cdddb"). InnerVolumeSpecName "kube-api-access-j9qbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.370207 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f77b840-931c-4b69-a2e4-23c7bf19f14e-kube-api-access-bxwss" (OuterVolumeSpecName: "kube-api-access-bxwss") pod "9f77b840-931c-4b69-a2e4-23c7bf19f14e" (UID: "9f77b840-931c-4b69-a2e4-23c7bf19f14e"). InnerVolumeSpecName "kube-api-access-bxwss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.371860 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eaf7dcd-b827-450a-8ac6-9953588f7697-kube-api-access-b4597" (OuterVolumeSpecName: "kube-api-access-b4597") pod "2eaf7dcd-b827-450a-8ac6-9953588f7697" (UID: "2eaf7dcd-b827-450a-8ac6-9953588f7697"). InnerVolumeSpecName "kube-api-access-b4597". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.381940 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=5.510514078 podStartE2EDuration="28.381920848s" podCreationTimestamp="2026-03-07 08:13:09 +0000 UTC" firstStartedPulling="2026-03-07 08:13:10.430878299 +0000 UTC m=+1447.340044774" lastFinishedPulling="2026-03-07 08:13:33.302285069 +0000 UTC m=+1470.211451544" observedRunningTime="2026-03-07 08:13:37.237854472 +0000 UTC m=+1474.147020947" watchObservedRunningTime="2026-03-07 08:13:37.381920848 +0000 UTC m=+1474.291087323" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.470072 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4597\" (UniqueName: \"kubernetes.io/projected/2eaf7dcd-b827-450a-8ac6-9953588f7697-kube-api-access-b4597\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.470259 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a467587-eec2-4610-af1d-e666203cdddb-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.470312 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9qbh\" (UniqueName: \"kubernetes.io/projected/9a467587-eec2-4610-af1d-e666203cdddb-kube-api-access-j9qbh\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc 
kubenswrapper[4761]: I0307 08:13:37.470359 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2eaf7dcd-b827-450a-8ac6-9953588f7697-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.470408 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f77b840-931c-4b69-a2e4-23c7bf19f14e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.470476 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxwss\" (UniqueName: \"kubernetes.io/projected/9f77b840-931c-4b69-a2e4-23c7bf19f14e-kube-api-access-bxwss\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.504489 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.717147 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="321917f1-f061-4e00-a598-2766772d2290" path="/var/lib/kubelet/pods/321917f1-f061-4e00-a598-2766772d2290/volumes" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.717978 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47de323f-ec4f-408e-ab84-7795676044fe" path="/var/lib/kubelet/pods/47de323f-ec4f-408e-ab84-7795676044fe/volumes" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.718608 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" path="/var/lib/kubelet/pods/d7481eb8-b067-41f0-9347-7665f72b5d6a/volumes" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.102146 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.230758 4761 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-cell1-db-create-9vzc2" event={"ID":"2eaf7dcd-b827-450a-8ac6-9953588f7697","Type":"ContainerDied","Data":"a8ab4a95c0e4ad7153d8253bfc7c03a3ba9db41cae2453a892658e50d6011eb0"} Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.231023 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8ab4a95c0e4ad7153d8253bfc7c03a3ba9db41cae2453a892658e50d6011eb0" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.231189 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.235313 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-172f-account-create-update-cmtmp" event={"ID":"9f77b840-931c-4b69-a2e4-23c7bf19f14e","Type":"ContainerDied","Data":"9e1b6ab8bf2aa8773bee8906890aa3523e400dd8d1dbac5842acb8a47475e77b"} Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.235520 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e1b6ab8bf2aa8773bee8906890aa3523e400dd8d1dbac5842acb8a47475e77b" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.235325 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.239661 4761 generic.go:334] "Generic (PLEG): container finished" podID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" containerID="17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880" exitCode=1 Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.239694 4761 generic.go:334] "Generic (PLEG): container finished" podID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" containerID="7efe8bfb1109c93a29541ce84a28fd64919c403695a6690db25d7713e18b1d67" exitCode=1 Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.239966 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5ddf795488-wndb8" event={"ID":"b40d04ab-9269-46e2-b17a-b6f2f8fddb78","Type":"ContainerDied","Data":"17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880"} Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.240169 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5ddf795488-wndb8" event={"ID":"b40d04ab-9269-46e2-b17a-b6f2f8fddb78","Type":"ContainerDied","Data":"7efe8bfb1109c93a29541ce84a28fd64919c403695a6690db25d7713e18b1d67"} Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.240232 4761 scope.go:117] "RemoveContainer" containerID="17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.240457 4761 scope.go:117] "RemoveContainer" containerID="7efe8bfb1109c93a29541ce84a28fd64919c403695a6690db25d7713e18b1d67" Mar 07 08:13:38 crc kubenswrapper[4761]: E0307 08:13:38.240883 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5ddf795488-wndb8_openstack(b40d04ab-9269-46e2-b17a-b6f2f8fddb78)\"" pod="openstack/heat-api-5ddf795488-wndb8" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" Mar 07 
08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.249378 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.249389 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8dtv6" event={"ID":"9a467587-eec2-4610-af1d-e666203cdddb","Type":"ContainerDied","Data":"a3bb7a9065043cb0ed93d4d97624d49e5a012912407f940b60aaf05050eb1aa9"} Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.249669 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3bb7a9065043cb0ed93d4d97624d49e5a012912407f940b60aaf05050eb1aa9" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.251258 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerStarted","Data":"faa12ada32ffbc2f463162b9872a776d586f459f95670a09363356876647efb3"} Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.252792 4761 generic.go:334] "Generic (PLEG): container finished" podID="17f15fe3-9df7-4bd6-8bca-d357f52e458d" containerID="e78cedba0361382470044aacfafd1307414c9299252e2a4466dd650032f6e402" exitCode=0 Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.252823 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" event={"ID":"17f15fe3-9df7-4bd6-8bca-d357f52e458d","Type":"ContainerDied","Data":"e78cedba0361382470044aacfafd1307414c9299252e2a4466dd650032f6e402"} Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.254211 4761 generic.go:334] "Generic (PLEG): container finished" podID="f692c15c-b560-4796-97b4-e522c6527322" containerID="c8274a881c4103816987c9c53e06931386f5dc08a985eb1ba945781335340a9f" exitCode=1 Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.254316 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" 
event={"ID":"f692c15c-b560-4796-97b4-e522c6527322","Type":"ContainerDied","Data":"c8274a881c4103816987c9c53e06931386f5dc08a985eb1ba945781335340a9f"}
Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.254865 4761 scope.go:117] "RemoveContainer" containerID="c8274a881c4103816987c9c53e06931386f5dc08a985eb1ba945781335340a9f"
Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.258030 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-69bc-account-create-update-jxq5h"
Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.258478 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-69bc-account-create-update-jxq5h" event={"ID":"803bf161-8aed-4d86-bb34-7664bfa5a21d","Type":"ContainerDied","Data":"dface9d72fd55aee49ba3b1b6e3de6e8169cc80b515bba79bbe1342a378acd4b"}
Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.258505 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dface9d72fd55aee49ba3b1b6e3de6e8169cc80b515bba79bbe1342a378acd4b"
Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.728084 4761 scope.go:117] "RemoveContainer" containerID="17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880"
Mar 07 08:13:38 crc kubenswrapper[4761]: E0307 08:13:38.729152 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880\": container with ID starting with 17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880 not found: ID does not exist" containerID="17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880"
Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.729194 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880"} err="failed to get container status \"17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880\": rpc error: code = NotFound desc = could not find container \"17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880\": container with ID starting with 17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880 not found: ID does not exist"
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.250133 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7f7585cb88-jshvv"
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.276119 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" event={"ID":"17f15fe3-9df7-4bd6-8bca-d357f52e458d","Type":"ContainerDied","Data":"ee20e1ad7fe019aab6b30fb6ddce84ad330e4fdb063fd7c00b7444e8795a600b"}
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.276175 4761 scope.go:117] "RemoveContainer" containerID="e78cedba0361382470044aacfafd1307414c9299252e2a4466dd650032f6e402"
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.276313 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7f7585cb88-jshvv"
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.282070 4761 generic.go:334] "Generic (PLEG): container finished" podID="f692c15c-b560-4796-97b4-e522c6527322" containerID="933ac196330b21829577a65f071256ec0e9325a7bc7b21e90aeb49a7a54f997f" exitCode=1
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.282133 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" event={"ID":"f692c15c-b560-4796-97b4-e522c6527322","Type":"ContainerDied","Data":"933ac196330b21829577a65f071256ec0e9325a7bc7b21e90aeb49a7a54f997f"}
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.283020 4761 scope.go:117] "RemoveContainer" containerID="933ac196330b21829577a65f071256ec0e9325a7bc7b21e90aeb49a7a54f997f"
Mar 07 08:13:39 crc kubenswrapper[4761]: E0307 08:13:39.283506 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6f6989b97c-mlg9v_openstack(f692c15c-b560-4796-97b4-e522c6527322)\"" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" podUID="f692c15c-b560-4796-97b4-e522c6527322"
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.287553 4761 scope.go:117] "RemoveContainer" containerID="7efe8bfb1109c93a29541ce84a28fd64919c403695a6690db25d7713e18b1d67"
Mar 07 08:13:39 crc kubenswrapper[4761]: E0307 08:13:39.287883 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5ddf795488-wndb8_openstack(b40d04ab-9269-46e2-b17a-b6f2f8fddb78)\"" pod="openstack/heat-api-5ddf795488-wndb8" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78"
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.292405 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerStarted","Data":"85dc95220a766c122a682753ac8f6be9951a34865916a2e670889cc3fee86054"}
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.306547 4761 scope.go:117] "RemoveContainer" containerID="c8274a881c4103816987c9c53e06931386f5dc08a985eb1ba945781335340a9f"
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.326443 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-combined-ca-bundle\") pod \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") "
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.328379 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data-custom\") pod \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") "
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.329195 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data\") pod \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") "
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.329290 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rshhg\" (UniqueName: \"kubernetes.io/projected/17f15fe3-9df7-4bd6-8bca-d357f52e458d-kube-api-access-rshhg\") pod \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") "
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.423190 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "17f15fe3-9df7-4bd6-8bca-d357f52e458d" (UID: "17f15fe3-9df7-4bd6-8bca-d357f52e458d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.423823 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f15fe3-9df7-4bd6-8bca-d357f52e458d-kube-api-access-rshhg" (OuterVolumeSpecName: "kube-api-access-rshhg") pod "17f15fe3-9df7-4bd6-8bca-d357f52e458d" (UID: "17f15fe3-9df7-4bd6-8bca-d357f52e458d"). InnerVolumeSpecName "kube-api-access-rshhg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.430385 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17f15fe3-9df7-4bd6-8bca-d357f52e458d" (UID: "17f15fe3-9df7-4bd6-8bca-d357f52e458d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.443600 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.443632 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.443644 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rshhg\" (UniqueName: \"kubernetes.io/projected/17f15fe3-9df7-4bd6-8bca-d357f52e458d-kube-api-access-rshhg\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.482764 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data" (OuterVolumeSpecName: "config-data") pod "17f15fe3-9df7-4bd6-8bca-d357f52e458d" (UID: "17f15fe3-9df7-4bd6-8bca-d357f52e458d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.545469 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data\") on node \"crc\" DevicePath \"\""
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.611091 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7f7585cb88-jshvv"]
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.625239 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7f7585cb88-jshvv"]
Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.719325 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f15fe3-9df7-4bd6-8bca-d357f52e458d" path="/var/lib/kubelet/pods/17f15fe3-9df7-4bd6-8bca-d357f52e458d/volumes"
Mar 07 08:13:40 crc kubenswrapper[4761]: I0307 08:13:40.319889 4761 scope.go:117] "RemoveContainer" containerID="933ac196330b21829577a65f071256ec0e9325a7bc7b21e90aeb49a7a54f997f"
Mar 07 08:13:40 crc kubenswrapper[4761]: E0307 08:13:40.320400 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6f6989b97c-mlg9v_openstack(f692c15c-b560-4796-97b4-e522c6527322)\"" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" podUID="f692c15c-b560-4796-97b4-e522c6527322"
Mar 07 08:13:40 crc kubenswrapper[4761]: I0307 08:13:40.327549 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerStarted","Data":"fd6f54d8b8141f1defb6ffa9f013fc364f4f505956775a25aeafc9ab8ecc856c"}
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.048263 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7wm25"]
Mar 07 08:13:41 crc kubenswrapper[4761]: E0307 08:13:41.048736 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="856a8ecd-1cf0-4150-9527-c457571785bd" containerName="mariadb-account-create-update"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.048747 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="856a8ecd-1cf0-4150-9527-c457571785bd" containerName="mariadb-account-create-update"
Mar 07 08:13:41 crc kubenswrapper[4761]: E0307 08:13:41.048769 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803bf161-8aed-4d86-bb34-7664bfa5a21d" containerName="mariadb-account-create-update"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.048775 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="803bf161-8aed-4d86-bb34-7664bfa5a21d" containerName="mariadb-account-create-update"
Mar 07 08:13:41 crc kubenswrapper[4761]: E0307 08:13:41.048783 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f15fe3-9df7-4bd6-8bca-d357f52e458d" containerName="heat-cfnapi"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.048790 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f15fe3-9df7-4bd6-8bca-d357f52e458d" containerName="heat-cfnapi"
Mar 07 08:13:41 crc kubenswrapper[4761]: E0307 08:13:41.048802 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2142964f-61fc-4ae0-af75-f6a72e968294" containerName="mariadb-database-create"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.048808 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2142964f-61fc-4ae0-af75-f6a72e968294" containerName="mariadb-database-create"
Mar 07 08:13:41 crc kubenswrapper[4761]: E0307 08:13:41.048822 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a467587-eec2-4610-af1d-e666203cdddb" containerName="mariadb-database-create"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.048828 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a467587-eec2-4610-af1d-e666203cdddb" containerName="mariadb-database-create"
Mar 07 08:13:41 crc kubenswrapper[4761]: E0307 08:13:41.048842 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eaf7dcd-b827-450a-8ac6-9953588f7697" containerName="mariadb-database-create"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.048848 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eaf7dcd-b827-450a-8ac6-9953588f7697" containerName="mariadb-database-create"
Mar 07 08:13:41 crc kubenswrapper[4761]: E0307 08:13:41.048864 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f77b840-931c-4b69-a2e4-23c7bf19f14e" containerName="mariadb-account-create-update"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.048871 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f77b840-931c-4b69-a2e4-23c7bf19f14e" containerName="mariadb-account-create-update"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.049066 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="856a8ecd-1cf0-4150-9527-c457571785bd" containerName="mariadb-account-create-update"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.049083 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="803bf161-8aed-4d86-bb34-7664bfa5a21d" containerName="mariadb-account-create-update"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.049096 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f77b840-931c-4b69-a2e4-23c7bf19f14e" containerName="mariadb-account-create-update"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.049106 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="2142964f-61fc-4ae0-af75-f6a72e968294" containerName="mariadb-database-create"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.049122 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eaf7dcd-b827-450a-8ac6-9953588f7697" containerName="mariadb-database-create"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.049132 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a467587-eec2-4610-af1d-e666203cdddb" containerName="mariadb-database-create"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.049144 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f15fe3-9df7-4bd6-8bca-d357f52e458d" containerName="heat-cfnapi"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.049981 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7wm25"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.056138 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.056402 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9t8nf"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.056283 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.068106 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7wm25"]
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.080968 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-scripts\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.081032 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22jfn\" (UniqueName: \"kubernetes.io/projected/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-kube-api-access-22jfn\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.081160 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-config-data\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.081550 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.183957 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-scripts\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.184198 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22jfn\" (UniqueName: \"kubernetes.io/projected/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-kube-api-access-22jfn\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.184336 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-config-data\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.184928 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.188414 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-scripts\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.188540 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.193020 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-config-data\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.201896 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22jfn\" (UniqueName: \"kubernetes.io/projected/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-kube-api-access-22jfn\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.288005 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7wm25"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.531675 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.531972 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.532892 4761 scope.go:117] "RemoveContainer" containerID="933ac196330b21829577a65f071256ec0e9325a7bc7b21e90aeb49a7a54f997f"
Mar 07 08:13:41 crc kubenswrapper[4761]: E0307 08:13:41.533351 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6f6989b97c-mlg9v_openstack(f692c15c-b560-4796-97b4-e522c6527322)\"" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" podUID="f692c15c-b560-4796-97b4-e522c6527322"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.547241 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5ddf795488-wndb8"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.548102 4761 scope.go:117] "RemoveContainer" containerID="7efe8bfb1109c93a29541ce84a28fd64919c403695a6690db25d7713e18b1d67"
Mar 07 08:13:41 crc kubenswrapper[4761]: E0307 08:13:41.548399 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5ddf795488-wndb8_openstack(b40d04ab-9269-46e2-b17a-b6f2f8fddb78)\"" pod="openstack/heat-api-5ddf795488-wndb8" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.548744 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-5ddf795488-wndb8"
Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.821489 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7wm25"]
Mar 07 08:13:41 crc kubenswrapper[4761]: W0307 08:13:41.834523 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2137fb0_1942_4a4d_9ac1_13e43c72ee4a.slice/crio-c1acaeeaaab2096e16cc9363dfc667af3e67b34e58a27631f0cc649eeb5c7b8e WatchSource:0}: Error finding container c1acaeeaaab2096e16cc9363dfc667af3e67b34e58a27631f0cc649eeb5c7b8e: Status 404 returned error can't find the container with id c1acaeeaaab2096e16cc9363dfc667af3e67b34e58a27631f0cc649eeb5c7b8e
Mar 07 08:13:42 crc kubenswrapper[4761]: I0307 08:13:42.359502 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7wm25" event={"ID":"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a","Type":"ContainerStarted","Data":"c1acaeeaaab2096e16cc9363dfc667af3e67b34e58a27631f0cc649eeb5c7b8e"}
Mar 07 08:13:42 crc kubenswrapper[4761]: I0307 08:13:42.362261 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerStarted","Data":"f0454e0ebdbc29ecc03a93834a7ea67d3d493baec1f7840a0a0ef59dd296e1bb"}
Mar 07 08:13:42 crc kubenswrapper[4761]: I0307 08:13:42.363248 4761 scope.go:117] "RemoveContainer" containerID="7efe8bfb1109c93a29541ce84a28fd64919c403695a6690db25d7713e18b1d67"
Mar 07 08:13:42 crc kubenswrapper[4761]: E0307 08:13:42.363663 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5ddf795488-wndb8_openstack(b40d04ab-9269-46e2-b17a-b6f2f8fddb78)\"" pod="openstack/heat-api-5ddf795488-wndb8" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78"
Mar 07 08:13:45 crc kubenswrapper[4761]: I0307 08:13:45.397547 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerStarted","Data":"a843d29a5dcbf50606c95527f681e2a90cb5867feedc49518f7774955c1128d5"}
Mar 07 08:13:45 crc kubenswrapper[4761]: I0307 08:13:45.398259 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 07 08:13:45 crc kubenswrapper[4761]: I0307 08:13:45.428982 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.653730079 podStartE2EDuration="10.428955135s" podCreationTimestamp="2026-03-07 08:13:35 +0000 UTC" firstStartedPulling="2026-03-07 08:13:37.512868103 +0000 UTC m=+1474.422034578" lastFinishedPulling="2026-03-07 08:13:44.288093169 +0000 UTC m=+1481.197259634" observedRunningTime="2026-03-07 08:13:45.421207119 +0000 UTC m=+1482.330373594" watchObservedRunningTime="2026-03-07 08:13:45.428955135 +0000 UTC m=+1482.338121620"
Mar 07 08:13:45 crc kubenswrapper[4761]: I0307 08:13:45.704806 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5jjc" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" probeResult="failure" output=<
Mar 07 08:13:45 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 08:13:45 crc kubenswrapper[4761]: >
Mar 07 08:13:46 crc kubenswrapper[4761]: I0307 08:13:46.463864 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-676c57c97f-mmh72"
Mar 07 08:13:46 crc kubenswrapper[4761]: I0307 08:13:46.525389 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-fc87bd775-l8cjx"]
Mar 07 08:13:46 crc kubenswrapper[4761]: I0307 08:13:46.525604 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-fc87bd775-l8cjx" podUID="26d13a5f-64b5-41e8-a74f-1c46a4f38dad" containerName="heat-engine" containerID="cri-o://d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9" gracePeriod=60
Mar 07 08:13:46 crc kubenswrapper[4761]: I0307 08:13:46.690506 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 07 08:13:47 crc kubenswrapper[4761]: I0307 08:13:47.431327 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="ceilometer-central-agent" containerID="cri-o://85dc95220a766c122a682753ac8f6be9951a34865916a2e670889cc3fee86054" gracePeriod=30
Mar 07 08:13:47 crc kubenswrapper[4761]: I0307 08:13:47.431982 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="proxy-httpd" containerID="cri-o://a843d29a5dcbf50606c95527f681e2a90cb5867feedc49518f7774955c1128d5" gracePeriod=30
Mar 07 08:13:47 crc kubenswrapper[4761]: I0307 08:13:47.432052 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="sg-core" containerID="cri-o://f0454e0ebdbc29ecc03a93834a7ea67d3d493baec1f7840a0a0ef59dd296e1bb" gracePeriod=30
Mar 07 08:13:47 crc kubenswrapper[4761]: I0307 08:13:47.432150 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="ceilometer-notification-agent" containerID="cri-o://fd6f54d8b8141f1defb6ffa9f013fc364f4f505956775a25aeafc9ab8ecc856c" gracePeriod=30
Mar 07 08:13:47 crc kubenswrapper[4761]: I0307 08:13:47.739941 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-b8f8c888f-mxmzb"
Mar 07 08:13:47 crc kubenswrapper[4761]: I0307 08:13:47.761600 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6f94956c9f-xbq22"
Mar 07 08:13:47 crc kubenswrapper[4761]: I0307 08:13:47.814516 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-759cd75854-8ppd6"
Mar 07 08:13:47 crc kubenswrapper[4761]: I0307 08:13:47.871795 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5ddf795488-wndb8"]
Mar 07 08:13:47 crc kubenswrapper[4761]: I0307 08:13:47.896281 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6f6989b97c-mlg9v"]
Mar 07 08:13:48 crc kubenswrapper[4761]: E0307 08:13:48.007770 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 07 08:13:48 crc kubenswrapper[4761]: E0307 08:13:48.031815 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 07 08:13:48 crc kubenswrapper[4761]: E0307 08:13:48.033930 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 07 08:13:48 crc kubenswrapper[4761]: E0307 08:13:48.033970 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-fc87bd775-l8cjx" podUID="26d13a5f-64b5-41e8-a74f-1c46a4f38dad" containerName="heat-engine"
Mar 07 08:13:48 crc kubenswrapper[4761]: I0307 08:13:48.464088 4761 generic.go:334] "Generic (PLEG): container finished" podID="94a423ba-64ee-463e-bc87-233d93782eb3" containerID="a843d29a5dcbf50606c95527f681e2a90cb5867feedc49518f7774955c1128d5" exitCode=0
Mar 07 08:13:48 crc kubenswrapper[4761]: I0307 08:13:48.464561 4761 generic.go:334] "Generic (PLEG): container finished" podID="94a423ba-64ee-463e-bc87-233d93782eb3" containerID="f0454e0ebdbc29ecc03a93834a7ea67d3d493baec1f7840a0a0ef59dd296e1bb" exitCode=2
Mar 07 08:13:48 crc kubenswrapper[4761]: I0307 08:13:48.464570 4761 generic.go:334] "Generic (PLEG): container finished" podID="94a423ba-64ee-463e-bc87-233d93782eb3" containerID="fd6f54d8b8141f1defb6ffa9f013fc364f4f505956775a25aeafc9ab8ecc856c" exitCode=0
Mar 07 08:13:48 crc kubenswrapper[4761]: I0307 08:13:48.464597 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerDied","Data":"a843d29a5dcbf50606c95527f681e2a90cb5867feedc49518f7774955c1128d5"}
Mar 07 08:13:48 crc kubenswrapper[4761]: I0307 08:13:48.464623 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerDied","Data":"f0454e0ebdbc29ecc03a93834a7ea67d3d493baec1f7840a0a0ef59dd296e1bb"}
Mar 07 08:13:48 crc kubenswrapper[4761]: I0307 08:13:48.465057 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerDied","Data":"fd6f54d8b8141f1defb6ffa9f013fc364f4f505956775a25aeafc9ab8ecc856c"}
Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.014383 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v"
Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.020414 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5ddf795488-wndb8"
Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.139534 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data\") pod \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") "
Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.139692 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data-custom\") pod \"f692c15c-b560-4796-97b4-e522c6527322\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") "
Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.139731 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-combined-ca-bundle\") pod \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") "
Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.139757 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgwqc\" (UniqueName: \"kubernetes.io/projected/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-kube-api-access-qgwqc\") pod \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") "
Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.139868 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-combined-ca-bundle\") pod \"f692c15c-b560-4796-97b4-e522c6527322\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") "
Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.139895 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p56cr\" (UniqueName: \"kubernetes.io/projected/f692c15c-b560-4796-97b4-e522c6527322-kube-api-access-p56cr\") pod \"f692c15c-b560-4796-97b4-e522c6527322\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") "
Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.139989 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data-custom\") pod \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") "
Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.140051 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data\") pod \"f692c15c-b560-4796-97b4-e522c6527322\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") "
Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.146974 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-kube-api-access-qgwqc" (OuterVolumeSpecName: "kube-api-access-qgwqc") pod "b40d04ab-9269-46e2-b17a-b6f2f8fddb78" (UID: "b40d04ab-9269-46e2-b17a-b6f2f8fddb78"). InnerVolumeSpecName "kube-api-access-qgwqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.148552 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f692c15c-b560-4796-97b4-e522c6527322" (UID: "f692c15c-b560-4796-97b4-e522c6527322"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.148771 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f692c15c-b560-4796-97b4-e522c6527322-kube-api-access-p56cr" (OuterVolumeSpecName: "kube-api-access-p56cr") pod "f692c15c-b560-4796-97b4-e522c6527322" (UID: "f692c15c-b560-4796-97b4-e522c6527322"). InnerVolumeSpecName "kube-api-access-p56cr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.160919 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b40d04ab-9269-46e2-b17a-b6f2f8fddb78" (UID: "b40d04ab-9269-46e2-b17a-b6f2f8fddb78"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.199885 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f692c15c-b560-4796-97b4-e522c6527322" (UID: "f692c15c-b560-4796-97b4-e522c6527322"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.211743 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b40d04ab-9269-46e2-b17a-b6f2f8fddb78" (UID: "b40d04ab-9269-46e2-b17a-b6f2f8fddb78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.249320 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgwqc\" (UniqueName: \"kubernetes.io/projected/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-kube-api-access-qgwqc\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.249349 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.249359 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p56cr\" (UniqueName: \"kubernetes.io/projected/f692c15c-b560-4796-97b4-e522c6527322-kube-api-access-p56cr\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.249367 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.249375 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.249383 4761 reconciler_common.go:293] "Volume detached for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.251349 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data" (OuterVolumeSpecName: "config-data") pod "f692c15c-b560-4796-97b4-e522c6527322" (UID: "f692c15c-b560-4796-97b4-e522c6527322"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.272216 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data" (OuterVolumeSpecName: "config-data") pod "b40d04ab-9269-46e2-b17a-b6f2f8fddb78" (UID: "b40d04ab-9269-46e2-b17a-b6f2f8fddb78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.352090 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.352304 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.543378 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" event={"ID":"f692c15c-b560-4796-97b4-e522c6527322","Type":"ContainerDied","Data":"a1b37c09efba40756556d359429c1fe4ea0713e12c8be1a8f692859a757c862e"} Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.543437 4761 scope.go:117] "RemoveContainer" 
containerID="933ac196330b21829577a65f071256ec0e9325a7bc7b21e90aeb49a7a54f997f" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.543569 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.553976 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5ddf795488-wndb8" event={"ID":"b40d04ab-9269-46e2-b17a-b6f2f8fddb78","Type":"ContainerDied","Data":"3c6a6b40b1679e1b159db93e088ad66d3e754f4102a5f5af9feb1da94e1af613"} Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.554272 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.599732 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6f6989b97c-mlg9v"] Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.616636 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6f6989b97c-mlg9v"] Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.632574 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5ddf795488-wndb8"] Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.644838 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5ddf795488-wndb8"] Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.766117 4761 scope.go:117] "RemoveContainer" containerID="7efe8bfb1109c93a29541ce84a28fd64919c403695a6690db25d7713e18b1d67" Mar 07 08:13:55 crc kubenswrapper[4761]: I0307 08:13:55.569459 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7wm25" event={"ID":"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a","Type":"ContainerStarted","Data":"42e5660165444ca6df91dbb38ff4e23b3096c7787fc5e04b8ca5bb536be08a99"} Mar 07 08:13:55 crc kubenswrapper[4761]: I0307 08:13:55.597829 4761 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-7wm25" podStartSLOduration=1.5924374669999999 podStartE2EDuration="14.597810836s" podCreationTimestamp="2026-03-07 08:13:41 +0000 UTC" firstStartedPulling="2026-03-07 08:13:41.836993074 +0000 UTC m=+1478.746159549" lastFinishedPulling="2026-03-07 08:13:54.842366443 +0000 UTC m=+1491.751532918" observedRunningTime="2026-03-07 08:13:55.591792366 +0000 UTC m=+1492.500958861" watchObservedRunningTime="2026-03-07 08:13:55.597810836 +0000 UTC m=+1492.506977311" Mar 07 08:13:55 crc kubenswrapper[4761]: I0307 08:13:55.690423 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5jjc" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" probeResult="failure" output=< Mar 07 08:13:55 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:13:55 crc kubenswrapper[4761]: > Mar 07 08:13:55 crc kubenswrapper[4761]: I0307 08:13:55.724999 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" path="/var/lib/kubelet/pods/b40d04ab-9269-46e2-b17a-b6f2f8fddb78/volumes" Mar 07 08:13:55 crc kubenswrapper[4761]: I0307 08:13:55.725773 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f692c15c-b560-4796-97b4-e522c6527322" path="/var/lib/kubelet/pods/f692c15c-b560-4796-97b4-e522c6527322/volumes" Mar 07 08:13:57 crc kubenswrapper[4761]: I0307 08:13:57.146175 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.217:3000/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 08:13:57 crc kubenswrapper[4761]: I0307 08:13:57.595958 4761 generic.go:334] "Generic (PLEG): container finished" 
podID="94a423ba-64ee-463e-bc87-233d93782eb3" containerID="85dc95220a766c122a682753ac8f6be9951a34865916a2e670889cc3fee86054" exitCode=0 Mar 07 08:13:57 crc kubenswrapper[4761]: I0307 08:13:57.595995 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerDied","Data":"85dc95220a766c122a682753ac8f6be9951a34865916a2e670889cc3fee86054"} Mar 07 08:13:57 crc kubenswrapper[4761]: I0307 08:13:57.985695 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.003150 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.004842 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.010953 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.011163 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , 
exit code -1" probeType="Readiness" pod="openstack/heat-engine-fc87bd775-l8cjx" podUID="26d13a5f-64b5-41e8-a74f-1c46a4f38dad" containerName="heat-engine" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.156869 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-run-httpd\") pod \"94a423ba-64ee-463e-bc87-233d93782eb3\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.157316 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4tbf\" (UniqueName: \"kubernetes.io/projected/94a423ba-64ee-463e-bc87-233d93782eb3-kube-api-access-f4tbf\") pod \"94a423ba-64ee-463e-bc87-233d93782eb3\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.157381 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-combined-ca-bundle\") pod \"94a423ba-64ee-463e-bc87-233d93782eb3\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.157401 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-log-httpd\") pod \"94a423ba-64ee-463e-bc87-233d93782eb3\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.157548 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-scripts\") pod \"94a423ba-64ee-463e-bc87-233d93782eb3\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.157633 4761 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-config-data\") pod \"94a423ba-64ee-463e-bc87-233d93782eb3\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.157622 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "94a423ba-64ee-463e-bc87-233d93782eb3" (UID: "94a423ba-64ee-463e-bc87-233d93782eb3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.157668 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-sg-core-conf-yaml\") pod \"94a423ba-64ee-463e-bc87-233d93782eb3\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.158008 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "94a423ba-64ee-463e-bc87-233d93782eb3" (UID: "94a423ba-64ee-463e-bc87-233d93782eb3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.158331 4761 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.158344 4761 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.165985 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a423ba-64ee-463e-bc87-233d93782eb3-kube-api-access-f4tbf" (OuterVolumeSpecName: "kube-api-access-f4tbf") pod "94a423ba-64ee-463e-bc87-233d93782eb3" (UID: "94a423ba-64ee-463e-bc87-233d93782eb3"). InnerVolumeSpecName "kube-api-access-f4tbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.166338 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-scripts" (OuterVolumeSpecName: "scripts") pod "94a423ba-64ee-463e-bc87-233d93782eb3" (UID: "94a423ba-64ee-463e-bc87-233d93782eb3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.203598 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "94a423ba-64ee-463e-bc87-233d93782eb3" (UID: "94a423ba-64ee-463e-bc87-233d93782eb3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.261302 4761 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.261338 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4tbf\" (UniqueName: \"kubernetes.io/projected/94a423ba-64ee-463e-bc87-233d93782eb3-kube-api-access-f4tbf\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.261351 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.264177 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94a423ba-64ee-463e-bc87-233d93782eb3" (UID: "94a423ba-64ee-463e-bc87-233d93782eb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.283074 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-config-data" (OuterVolumeSpecName: "config-data") pod "94a423ba-64ee-463e-bc87-233d93782eb3" (UID: "94a423ba-64ee-463e-bc87-233d93782eb3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.362586 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.362623 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.617767 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerDied","Data":"faa12ada32ffbc2f463162b9872a776d586f459f95670a09363356876647efb3"} Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.617820 4761 scope.go:117] "RemoveContainer" containerID="a843d29a5dcbf50606c95527f681e2a90cb5867feedc49518f7774955c1128d5" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.617983 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.684737 4761 scope.go:117] "RemoveContainer" containerID="f0454e0ebdbc29ecc03a93834a7ea67d3d493baec1f7840a0a0ef59dd296e1bb" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.689252 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.723082 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.736880 4761 scope.go:117] "RemoveContainer" containerID="fd6f54d8b8141f1defb6ffa9f013fc364f4f505956775a25aeafc9ab8ecc856c" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742032 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.742511 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="ceilometer-central-agent" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742524 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="ceilometer-central-agent" Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.742541 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" containerName="heat-api" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742548 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" containerName="heat-api" Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.742575 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f692c15c-b560-4796-97b4-e522c6527322" containerName="heat-cfnapi" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742580 4761 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f692c15c-b560-4796-97b4-e522c6527322" containerName="heat-cfnapi" Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.742592 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" containerName="heat-api" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742598 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" containerName="heat-api" Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.742610 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="ceilometer-notification-agent" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742618 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="ceilometer-notification-agent" Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.742635 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="proxy-httpd" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742641 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="proxy-httpd" Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.742650 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f692c15c-b560-4796-97b4-e522c6527322" containerName="heat-cfnapi" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742655 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f692c15c-b560-4796-97b4-e522c6527322" containerName="heat-cfnapi" Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.742670 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="sg-core" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742678 4761 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="sg-core" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742966 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f692c15c-b560-4796-97b4-e522c6527322" containerName="heat-cfnapi" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742981 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="ceilometer-notification-agent" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742992 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="proxy-httpd" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742999 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f692c15c-b560-4796-97b4-e522c6527322" containerName="heat-cfnapi" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.743007 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="sg-core" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.743028 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" containerName="heat-api" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.743040 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" containerName="heat-api" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.743051 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="ceilometer-central-agent" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.745084 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.747967 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.748125 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.752576 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.760305 4761 scope.go:117] "RemoveContainer" containerID="85dc95220a766c122a682753ac8f6be9951a34865916a2e670889cc3fee86054" Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.857024 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94a423ba_64ee_463e_bc87_233d93782eb3.slice/crio-faa12ada32ffbc2f463162b9872a776d586f459f95670a09363356876647efb3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94a423ba_64ee_463e_bc87_233d93782eb3.slice\": RecentStats: unable to find data in memory cache]" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.873322 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-config-data\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.874086 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-run-httpd\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " 
pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.874227 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.874374 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.874527 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-log-httpd\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.874575 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-scripts\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.874612 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh7xz\" (UniqueName: \"kubernetes.io/projected/b74195a9-43f5-4734-85dd-7092de0c7644-kube-api-access-kh7xz\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.977794 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-config-data\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.977914 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-run-httpd\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.977994 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.978033 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.978125 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-log-httpd\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.978161 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-scripts\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " 
pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.978186 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh7xz\" (UniqueName: \"kubernetes.io/projected/b74195a9-43f5-4734-85dd-7092de0c7644-kube-api-access-kh7xz\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.978615 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-log-httpd\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.978615 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-run-httpd\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.983514 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.983570 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.984294 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-scripts\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.994095 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh7xz\" (UniqueName: \"kubernetes.io/projected/b74195a9-43f5-4734-85dd-7092de0c7644-kube-api-access-kh7xz\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.999590 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-config-data\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:59 crc kubenswrapper[4761]: I0307 08:13:59.091976 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:13:59 crc kubenswrapper[4761]: I0307 08:13:59.644084 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:13:59 crc kubenswrapper[4761]: I0307 08:13:59.730528 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" path="/var/lib/kubelet/pods/94a423ba-64ee-463e-bc87-233d93782eb3/volumes" Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.181518 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547854-b54w4"] Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.184003 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547854-b54w4" Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.196945 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.197416 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.197609 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.200689 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547854-b54w4"] Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.229306 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwgww\" (UniqueName: \"kubernetes.io/projected/fa559f07-f757-48aa-91d6-8408654be6fb-kube-api-access-hwgww\") pod \"auto-csr-approver-29547854-b54w4\" (UID: \"fa559f07-f757-48aa-91d6-8408654be6fb\") " pod="openshift-infra/auto-csr-approver-29547854-b54w4" Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.331908 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwgww\" (UniqueName: \"kubernetes.io/projected/fa559f07-f757-48aa-91d6-8408654be6fb-kube-api-access-hwgww\") pod \"auto-csr-approver-29547854-b54w4\" (UID: \"fa559f07-f757-48aa-91d6-8408654be6fb\") " pod="openshift-infra/auto-csr-approver-29547854-b54w4" Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.353313 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwgww\" (UniqueName: \"kubernetes.io/projected/fa559f07-f757-48aa-91d6-8408654be6fb-kube-api-access-hwgww\") pod \"auto-csr-approver-29547854-b54w4\" (UID: \"fa559f07-f757-48aa-91d6-8408654be6fb\") " 
pod="openshift-infra/auto-csr-approver-29547854-b54w4" Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.656394 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547854-b54w4" Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.682026 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerStarted","Data":"e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd"} Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.682073 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerStarted","Data":"0891b5bcb540cf5685e1d4a26eab5e1d7d47956c6e7321db234953d82cb51a16"} Mar 07 08:14:01 crc kubenswrapper[4761]: I0307 08:14:01.321937 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547854-b54w4"] Mar 07 08:14:01 crc kubenswrapper[4761]: W0307 08:14:01.334935 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa559f07_f757_48aa_91d6_8408654be6fb.slice/crio-850cdfa2374f7f28be423ba15f758e3326a9481c7bba2f9f5d6b12008cb544b0 WatchSource:0}: Error finding container 850cdfa2374f7f28be423ba15f758e3326a9481c7bba2f9f5d6b12008cb544b0: Status 404 returned error can't find the container with id 850cdfa2374f7f28be423ba15f758e3326a9481c7bba2f9f5d6b12008cb544b0 Mar 07 08:14:01 crc kubenswrapper[4761]: I0307 08:14:01.694595 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerStarted","Data":"37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8"} Mar 07 08:14:01 crc kubenswrapper[4761]: I0307 08:14:01.697110 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29547854-b54w4" event={"ID":"fa559f07-f757-48aa-91d6-8408654be6fb","Type":"ContainerStarted","Data":"850cdfa2374f7f28be423ba15f758e3326a9481c7bba2f9f5d6b12008cb544b0"} Mar 07 08:14:01 crc kubenswrapper[4761]: I0307 08:14:01.809461 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:14:01 crc kubenswrapper[4761]: I0307 08:14:01.810365 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerName="glance-log" containerID="cri-o://b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84" gracePeriod=30 Mar 07 08:14:01 crc kubenswrapper[4761]: I0307 08:14:01.810430 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerName="glance-httpd" containerID="cri-o://18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20" gracePeriod=30 Mar 07 08:14:02 crc kubenswrapper[4761]: I0307 08:14:02.740173 4761 generic.go:334] "Generic (PLEG): container finished" podID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerID="b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84" exitCode=143 Mar 07 08:14:02 crc kubenswrapper[4761]: I0307 08:14:02.740775 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05b0e93e-5cbe-4e36-ada4-ff90ea710789","Type":"ContainerDied","Data":"b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84"} Mar 07 08:14:02 crc kubenswrapper[4761]: I0307 08:14:02.770872 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerStarted","Data":"af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623"} Mar 07 08:14:02 crc 
kubenswrapper[4761]: I0307 08:14:02.782149 4761 generic.go:334] "Generic (PLEG): container finished" podID="26d13a5f-64b5-41e8-a74f-1c46a4f38dad" containerID="d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9" exitCode=0 Mar 07 08:14:02 crc kubenswrapper[4761]: I0307 08:14:02.782206 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-fc87bd775-l8cjx" event={"ID":"26d13a5f-64b5-41e8-a74f-1c46a4f38dad","Type":"ContainerDied","Data":"d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9"} Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.071023 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.133874 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data\") pod \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.133954 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckcgq\" (UniqueName: \"kubernetes.io/projected/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-kube-api-access-ckcgq\") pod \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.134127 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data-custom\") pod \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.134314 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-combined-ca-bundle\") pod \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.143193 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "26d13a5f-64b5-41e8-a74f-1c46a4f38dad" (UID: "26d13a5f-64b5-41e8-a74f-1c46a4f38dad"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.156188 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-kube-api-access-ckcgq" (OuterVolumeSpecName: "kube-api-access-ckcgq") pod "26d13a5f-64b5-41e8-a74f-1c46a4f38dad" (UID: "26d13a5f-64b5-41e8-a74f-1c46a4f38dad"). InnerVolumeSpecName "kube-api-access-ckcgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.195827 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26d13a5f-64b5-41e8-a74f-1c46a4f38dad" (UID: "26d13a5f-64b5-41e8-a74f-1c46a4f38dad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.237321 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.237357 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckcgq\" (UniqueName: \"kubernetes.io/projected/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-kube-api-access-ckcgq\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.237368 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.297990 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data" (OuterVolumeSpecName: "config-data") pod "26d13a5f-64b5-41e8-a74f-1c46a4f38dad" (UID: "26d13a5f-64b5-41e8-a74f-1c46a4f38dad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.305254 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.305536 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerName="glance-log" containerID="cri-o://f0fc1f72a75b1539d68a0602b27e008732b428b7fe5f595bd67a5269690ae4c1" gracePeriod=30 Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.305698 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerName="glance-httpd" containerID="cri-o://a66fd390bbf68a3e4ff357dfdc728b5dbac9c698af22e5a0692112931c9003d1" gracePeriod=30 Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.339440 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.811536 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547854-b54w4" event={"ID":"fa559f07-f757-48aa-91d6-8408654be6fb","Type":"ContainerStarted","Data":"eadae3021f65aa1e1112361e2bcf5fc4f2eda7c5b0d47eff67c1e186e5afd8b1"} Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.878774 4761 generic.go:334] "Generic (PLEG): container finished" podID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerID="f0fc1f72a75b1539d68a0602b27e008732b428b7fe5f595bd67a5269690ae4c1" exitCode=143 Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.878903 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"a592362d-7e1a-4be8-9dc7-84ee7a6170db","Type":"ContainerDied","Data":"f0fc1f72a75b1539d68a0602b27e008732b428b7fe5f595bd67a5269690ae4c1"} Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.882948 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-fc87bd775-l8cjx" event={"ID":"26d13a5f-64b5-41e8-a74f-1c46a4f38dad","Type":"ContainerDied","Data":"18c8621e9c8c6855be61ecdbb44efba4c8635cfc8bca4aede6e4347459299c55"} Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.882992 4761 scope.go:117] "RemoveContainer" containerID="d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.883146 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.923274 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547854-b54w4" podStartSLOduration=2.85330217 podStartE2EDuration="3.923253888s" podCreationTimestamp="2026-03-07 08:14:00 +0000 UTC" firstStartedPulling="2026-03-07 08:14:01.338323521 +0000 UTC m=+1498.247489996" lastFinishedPulling="2026-03-07 08:14:02.408275239 +0000 UTC m=+1499.317441714" observedRunningTime="2026-03-07 08:14:03.896841799 +0000 UTC m=+1500.806008274" watchObservedRunningTime="2026-03-07 08:14:03.923253888 +0000 UTC m=+1500.832420363" Mar 07 08:14:04 crc kubenswrapper[4761]: I0307 08:14:04.068314 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-fc87bd775-l8cjx"] Mar 07 08:14:04 crc kubenswrapper[4761]: I0307 08:14:04.111067 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-fc87bd775-l8cjx"] Mar 07 08:14:04 crc kubenswrapper[4761]: I0307 08:14:04.897690 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerStarted","Data":"8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19"} Mar 07 08:14:04 crc kubenswrapper[4761]: I0307 08:14:04.898207 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 08:14:04 crc kubenswrapper[4761]: I0307 08:14:04.899502 4761 generic.go:334] "Generic (PLEG): container finished" podID="fa559f07-f757-48aa-91d6-8408654be6fb" containerID="eadae3021f65aa1e1112361e2bcf5fc4f2eda7c5b0d47eff67c1e186e5afd8b1" exitCode=0 Mar 07 08:14:04 crc kubenswrapper[4761]: I0307 08:14:04.899591 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547854-b54w4" event={"ID":"fa559f07-f757-48aa-91d6-8408654be6fb","Type":"ContainerDied","Data":"eadae3021f65aa1e1112361e2bcf5fc4f2eda7c5b0d47eff67c1e186e5afd8b1"} Mar 07 08:14:04 crc kubenswrapper[4761]: I0307 08:14:04.922545 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.677544279 podStartE2EDuration="6.922524692s" podCreationTimestamp="2026-03-07 08:13:58 +0000 UTC" firstStartedPulling="2026-03-07 08:13:59.6371694 +0000 UTC m=+1496.546335865" lastFinishedPulling="2026-03-07 08:14:03.882149803 +0000 UTC m=+1500.791316278" observedRunningTime="2026-03-07 08:14:04.916220115 +0000 UTC m=+1501.825386600" watchObservedRunningTime="2026-03-07 08:14:04.922524692 +0000 UTC m=+1501.831691167" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.316040 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.570010 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.681783 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5jjc" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" probeResult="failure" output=< Mar 07 08:14:05 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:14:05 crc kubenswrapper[4761]: > Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.724960 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26d13a5f-64b5-41e8-a74f-1c46a4f38dad" path="/var/lib/kubelet/pods/26d13a5f-64b5-41e8-a74f-1c46a4f38dad/volumes" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.733076 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-combined-ca-bundle\") pod \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.733633 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.733711 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwg2n\" (UniqueName: \"kubernetes.io/projected/05b0e93e-5cbe-4e36-ada4-ff90ea710789-kube-api-access-rwg2n\") pod \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.733760 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-logs\") pod \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.734931 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-logs" (OuterVolumeSpecName: "logs") pod "05b0e93e-5cbe-4e36-ada4-ff90ea710789" (UID: "05b0e93e-5cbe-4e36-ada4-ff90ea710789"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.736006 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-public-tls-certs\") pod \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.736209 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-httpd-run\") pod \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.736310 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-scripts\") pod \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.736347 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-config-data\") pod \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " Mar 07 08:14:05 crc kubenswrapper[4761]: 
I0307 08:14:05.737015 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "05b0e93e-5cbe-4e36-ada4-ff90ea710789" (UID: "05b0e93e-5cbe-4e36-ada4-ff90ea710789"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.737423 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.737441 4761 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.742738 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b0e93e-5cbe-4e36-ada4-ff90ea710789-kube-api-access-rwg2n" (OuterVolumeSpecName: "kube-api-access-rwg2n") pod "05b0e93e-5cbe-4e36-ada4-ff90ea710789" (UID: "05b0e93e-5cbe-4e36-ada4-ff90ea710789"). InnerVolumeSpecName "kube-api-access-rwg2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.747924 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-scripts" (OuterVolumeSpecName: "scripts") pod "05b0e93e-5cbe-4e36-ada4-ff90ea710789" (UID: "05b0e93e-5cbe-4e36-ada4-ff90ea710789"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.774188 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1" (OuterVolumeSpecName: "glance") pod "05b0e93e-5cbe-4e36-ada4-ff90ea710789" (UID: "05b0e93e-5cbe-4e36-ada4-ff90ea710789"). InnerVolumeSpecName "pvc-652e25a4-1797-4881-8c1b-50f95fd356e1". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.801133 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05b0e93e-5cbe-4e36-ada4-ff90ea710789" (UID: "05b0e93e-5cbe-4e36-ada4-ff90ea710789"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.839935 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.839973 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.839999 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") on node \"crc\" " Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.840015 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwg2n\" (UniqueName: 
\"kubernetes.io/projected/05b0e93e-5cbe-4e36-ada4-ff90ea710789-kube-api-access-rwg2n\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.847124 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "05b0e93e-5cbe-4e36-ada4-ff90ea710789" (UID: "05b0e93e-5cbe-4e36-ada4-ff90ea710789"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.847902 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-config-data" (OuterVolumeSpecName: "config-data") pod "05b0e93e-5cbe-4e36-ada4-ff90ea710789" (UID: "05b0e93e-5cbe-4e36-ada4-ff90ea710789"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.854381 4761 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod321917f1-f061-4e00-a598-2766772d2290"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod321917f1-f061-4e00-a598-2766772d2290] : Timed out while waiting for systemd to remove kubepods-burstable-pod321917f1_f061_4e00_a598_2766772d2290.slice" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.884709 4761 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.884861 4761 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-652e25a4-1797-4881-8c1b-50f95fd356e1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1") on node "crc" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.912261 4761 generic.go:334] "Generic (PLEG): container finished" podID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerID="18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20" exitCode=0 Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.912318 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.912335 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05b0e93e-5cbe-4e36-ada4-ff90ea710789","Type":"ContainerDied","Data":"18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20"} Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.912385 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05b0e93e-5cbe-4e36-ada4-ff90ea710789","Type":"ContainerDied","Data":"5f99fe4eaa0d6654572f8474c020d7e045645f945574566ab31bfb408d79ce3e"} Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.912409 4761 scope.go:117] "RemoveContainer" containerID="18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.942135 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.942167 4761 reconciler_common.go:293] "Volume detached for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.942177 4761 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.945622 4761 scope.go:117] "RemoveContainer" containerID="b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.956182 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.981785 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.998928 4761 scope.go:117] "RemoveContainer" containerID="18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20" Mar 07 08:14:06 crc kubenswrapper[4761]: E0307 08:14:06.001261 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20\": container with ID starting with 18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20 not found: ID does not exist" containerID="18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.001293 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20"} err="failed to get container status \"18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20\": rpc error: code = NotFound desc = could not find container 
\"18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20\": container with ID starting with 18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20 not found: ID does not exist" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.001316 4761 scope.go:117] "RemoveContainer" containerID="b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84" Mar 07 08:14:06 crc kubenswrapper[4761]: E0307 08:14:06.004750 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84\": container with ID starting with b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84 not found: ID does not exist" containerID="b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.004774 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84"} err="failed to get container status \"b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84\": rpc error: code = NotFound desc = could not find container \"b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84\": container with ID starting with b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84 not found: ID does not exist" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.010730 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:14:06 crc kubenswrapper[4761]: E0307 08:14:06.011227 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerName="glance-log" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.011244 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerName="glance-log" Mar 07 08:14:06 
crc kubenswrapper[4761]: E0307 08:14:06.011280 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerName="glance-httpd" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.011286 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerName="glance-httpd" Mar 07 08:14:06 crc kubenswrapper[4761]: E0307 08:14:06.011305 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d13a5f-64b5-41e8-a74f-1c46a4f38dad" containerName="heat-engine" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.011311 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d13a5f-64b5-41e8-a74f-1c46a4f38dad" containerName="heat-engine" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.011499 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerName="glance-httpd" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.011519 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="26d13a5f-64b5-41e8-a74f-1c46a4f38dad" containerName="heat-engine" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.011532 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerName="glance-log" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.012756 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.020010 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.020257 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.024520 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.159119 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.159177 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f78969ff-e84a-4fed-8d3d-21688ae544c7-logs\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.159266 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-config-data\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.159338 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/f78969ff-e84a-4fed-8d3d-21688ae544c7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.159382 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srtc5\" (UniqueName: \"kubernetes.io/projected/f78969ff-e84a-4fed-8d3d-21688ae544c7-kube-api-access-srtc5\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.159427 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.159449 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.159481 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-scripts\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.261668 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.261754 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.261793 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-scripts\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.261850 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.261880 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f78969ff-e84a-4fed-8d3d-21688ae544c7-logs\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.261948 4761 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-config-data\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.262009 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f78969ff-e84a-4fed-8d3d-21688ae544c7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.262042 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srtc5\" (UniqueName: \"kubernetes.io/projected/f78969ff-e84a-4fed-8d3d-21688ae544c7-kube-api-access-srtc5\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.262761 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f78969ff-e84a-4fed-8d3d-21688ae544c7-logs\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.262955 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f78969ff-e84a-4fed-8d3d-21688ae544c7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.265508 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.265538 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f1ce1c096842b9627111c5f89fad26fafb9d1f61d1f48c8efc1ee653de0d59a3/globalmount\"" pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.267387 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.267755 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.267771 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-scripts\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.268936 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.289824 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srtc5\" (UniqueName: \"kubernetes.io/projected/f78969ff-e84a-4fed-8d3d-21688ae544c7-kube-api-access-srtc5\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.328553 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.436549 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.684792 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547854-b54w4" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.775403 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwgww\" (UniqueName: \"kubernetes.io/projected/fa559f07-f757-48aa-91d6-8408654be6fb-kube-api-access-hwgww\") pod \"fa559f07-f757-48aa-91d6-8408654be6fb\" (UID: \"fa559f07-f757-48aa-91d6-8408654be6fb\") " Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.781440 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa559f07-f757-48aa-91d6-8408654be6fb-kube-api-access-hwgww" (OuterVolumeSpecName: "kube-api-access-hwgww") pod "fa559f07-f757-48aa-91d6-8408654be6fb" (UID: "fa559f07-f757-48aa-91d6-8408654be6fb"). InnerVolumeSpecName "kube-api-access-hwgww". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.879261 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwgww\" (UniqueName: \"kubernetes.io/projected/fa559f07-f757-48aa-91d6-8408654be6fb-kube-api-access-hwgww\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.899417 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547848-qbkn8"] Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.911865 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547848-qbkn8"] Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.927648 4761 generic.go:334] "Generic (PLEG): container finished" podID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerID="a66fd390bbf68a3e4ff357dfdc728b5dbac9c698af22e5a0692112931c9003d1" exitCode=0 Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.927774 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"a592362d-7e1a-4be8-9dc7-84ee7a6170db","Type":"ContainerDied","Data":"a66fd390bbf68a3e4ff357dfdc728b5dbac9c698af22e5a0692112931c9003d1"} Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.930125 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547854-b54w4" event={"ID":"fa559f07-f757-48aa-91d6-8408654be6fb","Type":"ContainerDied","Data":"850cdfa2374f7f28be423ba15f758e3326a9481c7bba2f9f5d6b12008cb544b0"} Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.930182 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="850cdfa2374f7f28be423ba15f758e3326a9481c7bba2f9f5d6b12008cb544b0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.930234 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="ceilometer-central-agent" containerID="cri-o://e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd" gracePeriod=30 Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.930280 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547854-b54w4" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.930436 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="proxy-httpd" containerID="cri-o://8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19" gracePeriod=30 Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.930488 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="sg-core" containerID="cri-o://af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623" gracePeriod=30 Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.930519 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="ceilometer-notification-agent" containerID="cri-o://37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8" gracePeriod=30 Mar 07 08:14:07 crc kubenswrapper[4761]: W0307 08:14:07.146843 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf78969ff_e84a_4fed_8d3d_21688ae544c7.slice/crio-97801a3200e1c15bd07be5e314d3fb6be639a5237d266c6737696a01fd0788ee WatchSource:0}: Error finding container 97801a3200e1c15bd07be5e314d3fb6be639a5237d266c6737696a01fd0788ee: Status 404 returned error can't find the container with id 97801a3200e1c15bd07be5e314d3fb6be639a5237d266c6737696a01fd0788ee Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.161428 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.318889 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.397434 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-httpd-run\") pod \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.397594 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-scripts\") pod \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.397652 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpkhv\" (UniqueName: \"kubernetes.io/projected/a592362d-7e1a-4be8-9dc7-84ee7a6170db-kube-api-access-qpkhv\") pod \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.397697 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-logs\") pod \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.397789 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-combined-ca-bundle\") pod \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.397856 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-config-data\") pod \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.398003 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-internal-tls-certs\") pod \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.398055 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a592362d-7e1a-4be8-9dc7-84ee7a6170db" (UID: "a592362d-7e1a-4be8-9dc7-84ee7a6170db"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.398307 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-logs" (OuterVolumeSpecName: "logs") pod "a592362d-7e1a-4be8-9dc7-84ee7a6170db" (UID: "a592362d-7e1a-4be8-9dc7-84ee7a6170db"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.398617 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.400447 4761 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.400467 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.403790 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a592362d-7e1a-4be8-9dc7-84ee7a6170db-kube-api-access-qpkhv" (OuterVolumeSpecName: "kube-api-access-qpkhv") pod "a592362d-7e1a-4be8-9dc7-84ee7a6170db" (UID: "a592362d-7e1a-4be8-9dc7-84ee7a6170db"). InnerVolumeSpecName "kube-api-access-qpkhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.409798 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-scripts" (OuterVolumeSpecName: "scripts") pod "a592362d-7e1a-4be8-9dc7-84ee7a6170db" (UID: "a592362d-7e1a-4be8-9dc7-84ee7a6170db"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.447272 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b" (OuterVolumeSpecName: "glance") pod "a592362d-7e1a-4be8-9dc7-84ee7a6170db" (UID: "a592362d-7e1a-4be8-9dc7-84ee7a6170db"). InnerVolumeSpecName "pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.450165 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a592362d-7e1a-4be8-9dc7-84ee7a6170db" (UID: "a592362d-7e1a-4be8-9dc7-84ee7a6170db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.482941 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a592362d-7e1a-4be8-9dc7-84ee7a6170db" (UID: "a592362d-7e1a-4be8-9dc7-84ee7a6170db"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.496967 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-config-data" (OuterVolumeSpecName: "config-data") pod "a592362d-7e1a-4be8-9dc7-84ee7a6170db" (UID: "a592362d-7e1a-4be8-9dc7-84ee7a6170db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.504674 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.504724 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.504761 4761 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.504793 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") on node \"crc\" " Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.504806 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.504816 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpkhv\" (UniqueName: \"kubernetes.io/projected/a592362d-7e1a-4be8-9dc7-84ee7a6170db-kube-api-access-qpkhv\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.569255 4761 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.569655 4761 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b") on node "crc" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.606861 4761 reconciler_common.go:293] "Volume detached for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.725230 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" path="/var/lib/kubelet/pods/05b0e93e-5cbe-4e36-ada4-ff90ea710789/volumes" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.729931 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91d7b5c4-c016-498d-bc33-0b7c52cb7504" path="/var/lib/kubelet/pods/91d7b5c4-c016-498d-bc33-0b7c52cb7504/volumes" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.944272 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.944268 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a592362d-7e1a-4be8-9dc7-84ee7a6170db","Type":"ContainerDied","Data":"971da60208d3b6ab528e27a23204c4e439302fa13aa18c215aa3e84d3072a45f"} Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.944442 4761 scope.go:117] "RemoveContainer" containerID="a66fd390bbf68a3e4ff357dfdc728b5dbac9c698af22e5a0692112931c9003d1" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.947383 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f78969ff-e84a-4fed-8d3d-21688ae544c7","Type":"ContainerStarted","Data":"97801a3200e1c15bd07be5e314d3fb6be639a5237d266c6737696a01fd0788ee"} Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.952347 4761 generic.go:334] "Generic (PLEG): container finished" podID="b74195a9-43f5-4734-85dd-7092de0c7644" containerID="8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19" exitCode=0 Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.952383 4761 generic.go:334] "Generic (PLEG): container finished" podID="b74195a9-43f5-4734-85dd-7092de0c7644" containerID="af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623" exitCode=2 Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.952393 4761 generic.go:334] "Generic (PLEG): container finished" podID="b74195a9-43f5-4734-85dd-7092de0c7644" containerID="37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8" exitCode=0 Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.952418 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerDied","Data":"8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19"} Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 
08:14:07.952444 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerDied","Data":"af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623"} Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.952457 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerDied","Data":"37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8"} Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.975467 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.992329 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.005074 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:14:08 crc kubenswrapper[4761]: E0307 08:14:08.005708 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerName="glance-httpd" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.005749 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerName="glance-httpd" Mar 07 08:14:08 crc kubenswrapper[4761]: E0307 08:14:08.005768 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa559f07-f757-48aa-91d6-8408654be6fb" containerName="oc" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.005776 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa559f07-f757-48aa-91d6-8408654be6fb" containerName="oc" Mar 07 08:14:08 crc kubenswrapper[4761]: E0307 08:14:08.005789 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerName="glance-log" 
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.005796 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerName="glance-log"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.006045 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa559f07-f757-48aa-91d6-8408654be6fb" containerName="oc"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.006083 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerName="glance-httpd"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.006111 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerName="glance-log"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.007576 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.010132 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.010179 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.035463 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.113827 4761 scope.go:117] "RemoveContainer" containerID="f0fc1f72a75b1539d68a0602b27e008732b428b7fe5f595bd67a5269690ae4c1"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.122842 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.122893 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.122938 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.123039 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7dfba149-bd76-4537-a488-ef2606ba2d9b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.123086 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.123291 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dfba149-bd76-4537-a488-ef2606ba2d9b-logs\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.123337 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.123469 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2k7h\" (UniqueName: \"kubernetes.io/projected/7dfba149-bd76-4537-a488-ef2606ba2d9b-kube-api-access-h2k7h\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.225364 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2k7h\" (UniqueName: \"kubernetes.io/projected/7dfba149-bd76-4537-a488-ef2606ba2d9b-kube-api-access-h2k7h\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.225486 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.225518 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.225560 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.225636 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7dfba149-bd76-4537-a488-ef2606ba2d9b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.225685 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.225811 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dfba149-bd76-4537-a488-ef2606ba2d9b-logs\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.225838 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.226766 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7dfba149-bd76-4537-a488-ef2606ba2d9b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.227259 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dfba149-bd76-4537-a488-ef2606ba2d9b-logs\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.231464 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.232004 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.238771 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.241534 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2k7h\" (UniqueName: \"kubernetes.io/projected/7dfba149-bd76-4537-a488-ef2606ba2d9b-kube-api-access-h2k7h\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.242112 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.246832 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.246877 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/851ce73d1b192d58f34aae6f8e819bd73d3fa6a2538f169362f333663b0c473e/globalmount\"" pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.302549 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.333044 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:09 crc kubenswrapper[4761]: I0307 08:14:09.950309 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" path="/var/lib/kubelet/pods/a592362d-7e1a-4be8-9dc7-84ee7a6170db/volumes"
Mar 07 08:14:09 crc kubenswrapper[4761]: I0307 08:14:09.975776 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f78969ff-e84a-4fed-8d3d-21688ae544c7","Type":"ContainerStarted","Data":"1428f12993cbbd9edb6c6315d7248c1f1ac494f81c93d24667c7ce0cfd42bbe6"}
Mar 07 08:14:10 crc kubenswrapper[4761]: W0307 08:14:10.404311 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dfba149_bd76_4537_a488_ef2606ba2d9b.slice/crio-e0cc1cadfde94bd8cb8a3fa3da8e75590ebdc3bd3dd9963acacd93ce43ea2a8d WatchSource:0}: Error finding container e0cc1cadfde94bd8cb8a3fa3da8e75590ebdc3bd3dd9963acacd93ce43ea2a8d: Status 404 returned error can't find the container with id e0cc1cadfde94bd8cb8a3fa3da8e75590ebdc3bd3dd9963acacd93ce43ea2a8d
Mar 07 08:14:10 crc kubenswrapper[4761]: I0307 08:14:10.407430 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 07 08:14:10 crc kubenswrapper[4761]: I0307 08:14:10.991831 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7dfba149-bd76-4537-a488-ef2606ba2d9b","Type":"ContainerStarted","Data":"e0cc1cadfde94bd8cb8a3fa3da8e75590ebdc3bd3dd9963acacd93ce43ea2a8d"}
Mar 07 08:14:10 crc kubenswrapper[4761]: I0307 08:14:10.994251 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f78969ff-e84a-4fed-8d3d-21688ae544c7","Type":"ContainerStarted","Data":"729d2cd97f58e5df753012e293b5fea513f05178252f7355a8b804e80f006779"}
Mar 07 08:14:11 crc kubenswrapper[4761]: I0307 08:14:11.022356 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.0223352 podStartE2EDuration="6.0223352s" podCreationTimestamp="2026-03-07 08:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:11.015408437 +0000 UTC m=+1507.924574912" watchObservedRunningTime="2026-03-07 08:14:11.0223352 +0000 UTC m=+1507.931501675"
Mar 07 08:14:12 crc kubenswrapper[4761]: I0307 08:14:12.007384 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7dfba149-bd76-4537-a488-ef2606ba2d9b","Type":"ContainerStarted","Data":"18e2e0fcd6c260b3338e36e4f01d4e5fdd3e191103431ff84740531ab7d85fa0"}
Mar 07 08:14:12 crc kubenswrapper[4761]: I0307 08:14:12.007919 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7dfba149-bd76-4537-a488-ef2606ba2d9b","Type":"ContainerStarted","Data":"830e0c367297c1825d3ef1d2a80a55b1ec2302f13c2d1f497d456175fe789a7d"}
Mar 07 08:14:12 crc kubenswrapper[4761]: I0307 08:14:12.047364 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.047339306 podStartE2EDuration="5.047339306s" podCreationTimestamp="2026-03-07 08:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:12.030920397 +0000 UTC m=+1508.940086892" watchObservedRunningTime="2026-03-07 08:14:12.047339306 +0000 UTC m=+1508.956505781"
Mar 07 08:14:12 crc kubenswrapper[4761]: I0307 08:14:12.888323 4761 scope.go:117] "RemoveContainer" containerID="90780c6769e50eb25ac4414322be19d0fb66add72262a799352d6b815dedb419"
Mar 07 08:14:13 crc kubenswrapper[4761]: I0307 08:14:13.769001 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 08:14:13 crc kubenswrapper[4761]: I0307 08:14:13.769631 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 08:14:15 crc kubenswrapper[4761]: I0307 08:14:15.684356 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5jjc" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" probeResult="failure" output=<
Mar 07 08:14:15 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 08:14:15 crc kubenswrapper[4761]: >
Mar 07 08:14:16 crc kubenswrapper[4761]: I0307 08:14:16.436941 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Mar 07 08:14:16 crc kubenswrapper[4761]: I0307 08:14:16.437467 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Mar 07 08:14:16 crc kubenswrapper[4761]: I0307 08:14:16.477825 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Mar 07 08:14:16 crc kubenswrapper[4761]: I0307 08:14:16.487021 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.077267 4761 generic.go:334] "Generic (PLEG): container finished" podID="f2137fb0-1942-4a4d-9ac1-13e43c72ee4a" containerID="42e5660165444ca6df91dbb38ff4e23b3096c7787fc5e04b8ca5bb536be08a99" exitCode=0
Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.077385 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7wm25" event={"ID":"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a","Type":"ContainerDied","Data":"42e5660165444ca6df91dbb38ff4e23b3096c7787fc5e04b8ca5bb536be08a99"}
Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.078398 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.078434 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.922482 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.970107 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-combined-ca-bundle\") pod \"b74195a9-43f5-4734-85dd-7092de0c7644\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") "
Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.970222 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-sg-core-conf-yaml\") pod \"b74195a9-43f5-4734-85dd-7092de0c7644\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") "
Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.970255 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-run-httpd\") pod \"b74195a9-43f5-4734-85dd-7092de0c7644\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") "
Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.970418 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-config-data\") pod \"b74195a9-43f5-4734-85dd-7092de0c7644\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") "
Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.970510 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-log-httpd\") pod \"b74195a9-43f5-4734-85dd-7092de0c7644\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") "
Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.970650 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh7xz\" (UniqueName: \"kubernetes.io/projected/b74195a9-43f5-4734-85dd-7092de0c7644-kube-api-access-kh7xz\") pod \"b74195a9-43f5-4734-85dd-7092de0c7644\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") "
Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.970685 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-scripts\") pod \"b74195a9-43f5-4734-85dd-7092de0c7644\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") "
Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.973254 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b74195a9-43f5-4734-85dd-7092de0c7644" (UID: "b74195a9-43f5-4734-85dd-7092de0c7644"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.973459 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b74195a9-43f5-4734-85dd-7092de0c7644" (UID: "b74195a9-43f5-4734-85dd-7092de0c7644"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.981988 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-scripts" (OuterVolumeSpecName: "scripts") pod "b74195a9-43f5-4734-85dd-7092de0c7644" (UID: "b74195a9-43f5-4734-85dd-7092de0c7644"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.982047 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b74195a9-43f5-4734-85dd-7092de0c7644-kube-api-access-kh7xz" (OuterVolumeSpecName: "kube-api-access-kh7xz") pod "b74195a9-43f5-4734-85dd-7092de0c7644" (UID: "b74195a9-43f5-4734-85dd-7092de0c7644"). InnerVolumeSpecName "kube-api-access-kh7xz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.018866 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b74195a9-43f5-4734-85dd-7092de0c7644" (UID: "b74195a9-43f5-4734-85dd-7092de0c7644"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.074886 4761 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.075313 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh7xz\" (UniqueName: \"kubernetes.io/projected/b74195a9-43f5-4734-85dd-7092de0c7644-kube-api-access-kh7xz\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.075328 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-scripts\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.075341 4761 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.075352 4761 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.099642 4761 generic.go:334] "Generic (PLEG): container finished" podID="b74195a9-43f5-4734-85dd-7092de0c7644" containerID="e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd" exitCode=0
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.099892 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerDied","Data":"e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd"}
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.099935 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerDied","Data":"0891b5bcb540cf5685e1d4a26eab5e1d7d47956c6e7321db234953d82cb51a16"}
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.099954 4761 scope.go:117] "RemoveContainer" containerID="8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.100126 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.105925 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b74195a9-43f5-4734-85dd-7092de0c7644" (UID: "b74195a9-43f5-4734-85dd-7092de0c7644"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.175801 4761 scope.go:117] "RemoveContainer" containerID="af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.175872 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-config-data" (OuterVolumeSpecName: "config-data") pod "b74195a9-43f5-4734-85dd-7092de0c7644" (UID: "b74195a9-43f5-4734-85dd-7092de0c7644"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.177748 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.177778 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-config-data\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.200140 4761 scope.go:117] "RemoveContainer" containerID="37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.247529 4761 scope.go:117] "RemoveContainer" containerID="e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.277234 4761 scope.go:117] "RemoveContainer" containerID="8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19"
Mar 07 08:14:18 crc kubenswrapper[4761]: E0307 08:14:18.278281 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19\": container with ID starting with 8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19 not found: ID does not exist" containerID="8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.278348 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19"} err="failed to get container status \"8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19\": rpc error: code = NotFound desc = could not find container \"8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19\": container with ID starting with 8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19 not found: ID does not exist"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.278380 4761 scope.go:117] "RemoveContainer" containerID="af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623"
Mar 07 08:14:18 crc kubenswrapper[4761]: E0307 08:14:18.281228 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623\": container with ID starting with af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623 not found: ID does not exist" containerID="af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.281262 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623"} err="failed to get container status \"af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623\": rpc error: code = NotFound desc = could not find container \"af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623\": container with ID starting with af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623 not found: ID does not exist"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.281281 4761 scope.go:117] "RemoveContainer" containerID="37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8"
Mar 07 08:14:18 crc kubenswrapper[4761]: E0307 08:14:18.281680 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8\": container with ID starting with 37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8 not found: ID does not exist" containerID="37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.281724 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8"} err="failed to get container status \"37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8\": rpc error: code = NotFound desc = could not find container \"37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8\": container with ID starting with 37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8 not found: ID does not exist"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.281745 4761 scope.go:117] "RemoveContainer" containerID="e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd"
Mar 07 08:14:18 crc kubenswrapper[4761]: E0307 08:14:18.282033 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd\": container with ID starting with e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd not found: ID does not exist" containerID="e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.282049 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd"} err="failed to get container status \"e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd\": rpc error: code = NotFound desc = could not find container \"e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd\": container with ID starting with e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd not found: ID does not exist"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.333337 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.333926 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.372635 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.385375 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.401274 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7wm25"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.464046 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.484957 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.550884 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 07 08:14:18 crc kubenswrapper[4761]: E0307 08:14:18.551412 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="ceilometer-central-agent"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551434 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="ceilometer-central-agent"
Mar 07 08:14:18 crc kubenswrapper[4761]: E0307 08:14:18.551466 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="sg-core"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551473 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="sg-core"
Mar 07 08:14:18 crc kubenswrapper[4761]: E0307 08:14:18.551516 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="proxy-httpd"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551528 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="proxy-httpd"
Mar 07 08:14:18 crc kubenswrapper[4761]: E0307 08:14:18.551553 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="ceilometer-notification-agent"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551561 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="ceilometer-notification-agent"
Mar 07 08:14:18 crc kubenswrapper[4761]: E0307 08:14:18.551585 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2137fb0-1942-4a4d-9ac1-13e43c72ee4a" containerName="nova-cell0-conductor-db-sync"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551592 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2137fb0-1942-4a4d-9ac1-13e43c72ee4a" containerName="nova-cell0-conductor-db-sync"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551834 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="ceilometer-central-agent"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551848 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2137fb0-1942-4a4d-9ac1-13e43c72ee4a" containerName="nova-cell0-conductor-db-sync"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551858 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="sg-core"
Mar 07 08:14:18 crc kubenswrapper[4761]: I0307
08:14:18.551873 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="proxy-httpd" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551888 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="ceilometer-notification-agent" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.553868 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.556355 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.556906 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.583254 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.623923 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-config-data\") pod \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.623999 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-scripts\") pod \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.624222 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-combined-ca-bundle\") pod 
\"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.624277 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22jfn\" (UniqueName: \"kubernetes.io/projected/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-kube-api-access-22jfn\") pod \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.633428 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-kube-api-access-22jfn" (OuterVolumeSpecName: "kube-api-access-22jfn") pod "f2137fb0-1942-4a4d-9ac1-13e43c72ee4a" (UID: "f2137fb0-1942-4a4d-9ac1-13e43c72ee4a"). InnerVolumeSpecName "kube-api-access-22jfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.635175 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-scripts" (OuterVolumeSpecName: "scripts") pod "f2137fb0-1942-4a4d-9ac1-13e43c72ee4a" (UID: "f2137fb0-1942-4a4d-9ac1-13e43c72ee4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.672812 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2137fb0-1942-4a4d-9ac1-13e43c72ee4a" (UID: "f2137fb0-1942-4a4d-9ac1-13e43c72ee4a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.699261 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-config-data" (OuterVolumeSpecName: "config-data") pod "f2137fb0-1942-4a4d-9ac1-13e43c72ee4a" (UID: "f2137fb0-1942-4a4d-9ac1-13e43c72ee4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.726946 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727100 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727142 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-run-httpd\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727177 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9m7d\" (UniqueName: \"kubernetes.io/projected/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-kube-api-access-d9m7d\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 
07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727200 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-log-httpd\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727230 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-config-data\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727249 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-scripts\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727658 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727779 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727798 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727813 4761 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-22jfn\" (UniqueName: \"kubernetes.io/projected/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-kube-api-access-22jfn\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.830211 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.830296 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-run-httpd\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.830347 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9m7d\" (UniqueName: \"kubernetes.io/projected/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-kube-api-access-d9m7d\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.830408 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-log-httpd\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.830486 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-config-data\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.830529 
4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-scripts\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.830635 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.834327 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-log-httpd\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.834401 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-run-httpd\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.838406 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.839266 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-scripts\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " 
pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.839388 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-config-data\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.839758 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.854535 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9m7d\" (UniqueName: \"kubernetes.io/projected/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-kube-api-access-d9m7d\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.112603 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.129803 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7wm25" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.132443 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7wm25" event={"ID":"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a","Type":"ContainerDied","Data":"c1acaeeaaab2096e16cc9363dfc667af3e67b34e58a27631f0cc649eeb5c7b8e"} Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.132604 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1acaeeaaab2096e16cc9363dfc667af3e67b34e58a27631f0cc649eeb5c7b8e" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.133206 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.138365 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.262061 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.264185 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.271614 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.272235 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9t8nf" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.307212 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.450152 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af14fdad-b14e-465d-bd67-6f5f89f87d45-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"af14fdad-b14e-465d-bd67-6f5f89f87d45\") " pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.450268 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8qgp\" (UniqueName: \"kubernetes.io/projected/af14fdad-b14e-465d-bd67-6f5f89f87d45-kube-api-access-t8qgp\") pod \"nova-cell0-conductor-0\" (UID: \"af14fdad-b14e-465d-bd67-6f5f89f87d45\") " pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.450299 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af14fdad-b14e-465d-bd67-6f5f89f87d45-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"af14fdad-b14e-465d-bd67-6f5f89f87d45\") " pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.552216 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/af14fdad-b14e-465d-bd67-6f5f89f87d45-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"af14fdad-b14e-465d-bd67-6f5f89f87d45\") " pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.552398 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af14fdad-b14e-465d-bd67-6f5f89f87d45-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"af14fdad-b14e-465d-bd67-6f5f89f87d45\") " pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.552526 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8qgp\" (UniqueName: \"kubernetes.io/projected/af14fdad-b14e-465d-bd67-6f5f89f87d45-kube-api-access-t8qgp\") pod \"nova-cell0-conductor-0\" (UID: \"af14fdad-b14e-465d-bd67-6f5f89f87d45\") " pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.559096 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af14fdad-b14e-465d-bd67-6f5f89f87d45-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"af14fdad-b14e-465d-bd67-6f5f89f87d45\") " pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.560830 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af14fdad-b14e-465d-bd67-6f5f89f87d45-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"af14fdad-b14e-465d-bd67-6f5f89f87d45\") " pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.573546 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8qgp\" (UniqueName: \"kubernetes.io/projected/af14fdad-b14e-465d-bd67-6f5f89f87d45-kube-api-access-t8qgp\") pod \"nova-cell0-conductor-0\" (UID: 
\"af14fdad-b14e-465d-bd67-6f5f89f87d45\") " pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.594482 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.738013 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" path="/var/lib/kubelet/pods/b74195a9-43f5-4734-85dd-7092de0c7644/volumes" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.774624 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:20 crc kubenswrapper[4761]: I0307 08:14:20.079329 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 07 08:14:20 crc kubenswrapper[4761]: I0307 08:14:20.079802 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 08:14:20 crc kubenswrapper[4761]: I0307 08:14:20.086435 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 07 08:14:20 crc kubenswrapper[4761]: I0307 08:14:20.116764 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 07 08:14:20 crc kubenswrapper[4761]: I0307 08:14:20.150639 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"af14fdad-b14e-465d-bd67-6f5f89f87d45","Type":"ContainerStarted","Data":"3ca60b7da36b7be65173b5b9fecac0e9014ec0f413ca152ba1290768bbd1227c"} Mar 07 08:14:20 crc kubenswrapper[4761]: I0307 08:14:20.170129 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerStarted","Data":"68d2a1e8dcf680e1682fdb45273afb6748998164a4d34d2b2c0184052f6a908e"} Mar 07 08:14:21 crc kubenswrapper[4761]: I0307 08:14:21.180364 
4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"af14fdad-b14e-465d-bd67-6f5f89f87d45","Type":"ContainerStarted","Data":"3391824ae2f57be6af173203a6f63a6bf742d8fbc7b3bdcfda7b2381fa6a241c"} Mar 07 08:14:21 crc kubenswrapper[4761]: I0307 08:14:21.181557 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:21 crc kubenswrapper[4761]: I0307 08:14:21.182793 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerStarted","Data":"de2ac32dbb5a41220c7a84566414f147406027696b853c7b399e69472f50858b"} Mar 07 08:14:21 crc kubenswrapper[4761]: I0307 08:14:21.182817 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 08:14:21 crc kubenswrapper[4761]: I0307 08:14:21.182835 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 08:14:21 crc kubenswrapper[4761]: I0307 08:14:21.205152 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.20513017 podStartE2EDuration="2.20513017s" podCreationTimestamp="2026-03-07 08:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:21.196989487 +0000 UTC m=+1518.106155962" watchObservedRunningTime="2026-03-07 08:14:21.20513017 +0000 UTC m=+1518.114296645" Mar 07 08:14:21 crc kubenswrapper[4761]: I0307 08:14:21.720542 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 07 08:14:21 crc kubenswrapper[4761]: I0307 08:14:21.757438 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 07 08:14:22 crc kubenswrapper[4761]: I0307 08:14:22.203878 4761 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerStarted","Data":"831dd82cf11939dfe36196f3a5b44495796ada99b21ae36d142071463a50f01c"} Mar 07 08:14:22 crc kubenswrapper[4761]: I0307 08:14:22.204219 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerStarted","Data":"3857f525f2d3053fa6279a8b9c61e360881f39dae23ed89a286c5a0b01c84858"} Mar 07 08:14:24 crc kubenswrapper[4761]: I0307 08:14:24.688115 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:14:24 crc kubenswrapper[4761]: I0307 08:14:24.755228 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:14:24 crc kubenswrapper[4761]: I0307 08:14:24.922128 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q5jjc"] Mar 07 08:14:25 crc kubenswrapper[4761]: I0307 08:14:25.472069 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerStarted","Data":"c7fc3252256a8b72c1ee5410ff1687c7f3b5b09e36dde701bee55e8536730d4c"} Mar 07 08:14:25 crc kubenswrapper[4761]: I0307 08:14:25.472238 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 08:14:25 crc kubenswrapper[4761]: I0307 08:14:25.494596 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.305838258 podStartE2EDuration="7.494572941s" podCreationTimestamp="2026-03-07 08:14:18 +0000 UTC" firstStartedPulling="2026-03-07 08:14:19.778425813 +0000 UTC m=+1516.687592288" lastFinishedPulling="2026-03-07 08:14:24.967160496 +0000 UTC m=+1521.876326971" 
observedRunningTime="2026-03-07 08:14:25.486837568 +0000 UTC m=+1522.396004043" watchObservedRunningTime="2026-03-07 08:14:25.494572941 +0000 UTC m=+1522.403739416" Mar 07 08:14:26 crc kubenswrapper[4761]: I0307 08:14:26.483593 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q5jjc" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" containerID="cri-o://3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964" gracePeriod=2 Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.038582 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.182489 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tblc8\" (UniqueName: \"kubernetes.io/projected/d2217e77-ce96-4ec3-9759-79f03958dc9c-kube-api-access-tblc8\") pod \"d2217e77-ce96-4ec3-9759-79f03958dc9c\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.182702 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-catalog-content\") pod \"d2217e77-ce96-4ec3-9759-79f03958dc9c\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.182884 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-utilities\") pod \"d2217e77-ce96-4ec3-9759-79f03958dc9c\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.183693 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-utilities" (OuterVolumeSpecName: "utilities") pod "d2217e77-ce96-4ec3-9759-79f03958dc9c" (UID: "d2217e77-ce96-4ec3-9759-79f03958dc9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.190070 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2217e77-ce96-4ec3-9759-79f03958dc9c-kube-api-access-tblc8" (OuterVolumeSpecName: "kube-api-access-tblc8") pod "d2217e77-ce96-4ec3-9759-79f03958dc9c" (UID: "d2217e77-ce96-4ec3-9759-79f03958dc9c"). InnerVolumeSpecName "kube-api-access-tblc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.285809 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.286067 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tblc8\" (UniqueName: \"kubernetes.io/projected/d2217e77-ce96-4ec3-9759-79f03958dc9c-kube-api-access-tblc8\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.309931 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2217e77-ce96-4ec3-9759-79f03958dc9c" (UID: "d2217e77-ce96-4ec3-9759-79f03958dc9c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.388586 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.496023 4761 generic.go:334] "Generic (PLEG): container finished" podID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerID="3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964" exitCode=0
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.496168 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5jjc" event={"ID":"d2217e77-ce96-4ec3-9759-79f03958dc9c","Type":"ContainerDied","Data":"3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964"}
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.496307 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q5jjc"
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.496362 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5jjc" event={"ID":"d2217e77-ce96-4ec3-9759-79f03958dc9c","Type":"ContainerDied","Data":"bf4ea89029ab40970ab415d2d085585802656f14ef4bd9a850650491e936c122"}
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.496438 4761 scope.go:117] "RemoveContainer" containerID="3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964"
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.540284 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q5jjc"]
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.552066 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q5jjc"]
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.567364 4761 scope.go:117] "RemoveContainer" containerID="28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd"
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.612988 4761 scope.go:117] "RemoveContainer" containerID="defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593"
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.653588 4761 scope.go:117] "RemoveContainer" containerID="3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964"
Mar 07 08:14:27 crc kubenswrapper[4761]: E0307 08:14:27.654182 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964\": container with ID starting with 3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964 not found: ID does not exist" containerID="3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964"
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.654269 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964"} err="failed to get container status \"3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964\": rpc error: code = NotFound desc = could not find container \"3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964\": container with ID starting with 3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964 not found: ID does not exist"
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.654302 4761 scope.go:117] "RemoveContainer" containerID="28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd"
Mar 07 08:14:27 crc kubenswrapper[4761]: E0307 08:14:27.654810 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd\": container with ID starting with 28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd not found: ID does not exist" containerID="28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd"
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.654850 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd"} err="failed to get container status \"28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd\": rpc error: code = NotFound desc = could not find container \"28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd\": container with ID starting with 28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd not found: ID does not exist"
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.654900 4761 scope.go:117] "RemoveContainer" containerID="defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593"
Mar 07 08:14:27 crc kubenswrapper[4761]: E0307 08:14:27.655280 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593\": container with ID starting with defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593 not found: ID does not exist" containerID="defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593"
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.655326 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593"} err="failed to get container status \"defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593\": rpc error: code = NotFound desc = could not find container \"defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593\": container with ID starting with defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593 not found: ID does not exist"
Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.725180 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" path="/var/lib/kubelet/pods/d2217e77-ce96-4ec3-9759-79f03958dc9c/volumes"
Mar 07 08:14:29 crc kubenswrapper[4761]: I0307 08:14:29.636640 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.452589 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-hg9sm"]
Mar 07 08:14:30 crc kubenswrapper[4761]: E0307 08:14:30.453207 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.453232 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server"
Mar 07 08:14:30 crc kubenswrapper[4761]: E0307 08:14:30.453276 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="extract-content"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.453286 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="extract-content"
Mar 07 08:14:30 crc kubenswrapper[4761]: E0307 08:14:30.453302 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="extract-utilities"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.453311 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="extract-utilities"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.453546 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.456047 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hg9sm"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.458653 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.464169 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.469860 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hg9sm"]
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.563530 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlwcd\" (UniqueName: \"kubernetes.io/projected/2dac6b04-d81b-43a0-8b71-ebaa8842366d-kube-api-access-jlwcd\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.563876 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-config-data\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.563904 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-scripts\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.564101 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.612819 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.614812 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.623128 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.631521 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.667683 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlwcd\" (UniqueName: \"kubernetes.io/projected/2dac6b04-d81b-43a0-8b71-ebaa8842366d-kube-api-access-jlwcd\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.667728 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-config-data\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.667765 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-scripts\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.667791 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e64fc67d-9589-470f-bac1-53ab06ccf63a-logs\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.667838 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.667863 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-config-data\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.667897 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4twrn\" (UniqueName: \"kubernetes.io/projected/e64fc67d-9589-470f-bac1-53ab06ccf63a-kube-api-access-4twrn\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.667953 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.710516 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-config-data\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.725525 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.742294 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.744266 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.747462 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-scripts\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.749633 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlwcd\" (UniqueName: \"kubernetes.io/projected/2dac6b04-d81b-43a0-8b71-ebaa8842366d-kube-api-access-jlwcd\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.756540 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.783826 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.786178 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hg9sm"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.795409 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.796980 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.800821 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.801295 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.801389 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-config-data\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.801414 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-logs\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.801520 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4twrn\" (UniqueName: \"kubernetes.io/projected/e64fc67d-9589-470f-bac1-53ab06ccf63a-kube-api-access-4twrn\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.801641 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-config-data\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.801662 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.801861 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzn54\" (UniqueName: \"kubernetes.io/projected/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-kube-api-access-qzn54\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.802139 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e64fc67d-9589-470f-bac1-53ab06ccf63a-logs\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.802587 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e64fc67d-9589-470f-bac1-53ab06ccf63a-logs\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.827495 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-config-data\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.831352 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.835104 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.850382 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4twrn\" (UniqueName: \"kubernetes.io/projected/e64fc67d-9589-470f-bac1-53ab06ccf63a-kube-api-access-4twrn\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.899799 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.901234 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.907916 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.909197 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-config-data\") pod \"nova-scheduler-0\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " pod="openstack/nova-scheduler-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.909262 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-logs\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.909319 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-config-data\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.909339 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.909397 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzn54\" (UniqueName: \"kubernetes.io/projected/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-kube-api-access-qzn54\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.909435 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " pod="openstack/nova-scheduler-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.909488 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt6kp\" (UniqueName: \"kubernetes.io/projected/9261fde2-342e-4a37-b8f9-f6715d09b003-kube-api-access-gt6kp\") pod \"nova-scheduler-0\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " pod="openstack/nova-scheduler-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.909814 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-logs\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.918256 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-config-data\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.920722 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.940709 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.941306 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.944766 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzn54\" (UniqueName: \"kubernetes.io/projected/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-kube-api-access-qzn54\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.964811 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-vdbwn"]
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.966815 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.993251 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-vdbwn"]
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.024892 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rlfs\" (UniqueName: \"kubernetes.io/projected/1cceca9f-0dae-4298-b495-2c2e09e6e63d-kube-api-access-9rlfs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.025022 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " pod="openstack/nova-scheduler-0"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.025073 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.025192 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt6kp\" (UniqueName: \"kubernetes.io/projected/9261fde2-342e-4a37-b8f9-f6715d09b003-kube-api-access-gt6kp\") pod \"nova-scheduler-0\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " pod="openstack/nova-scheduler-0"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.025231 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-svc\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.025273 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzpw7\" (UniqueName: \"kubernetes.io/projected/bc9f56df-ded6-4d8a-8075-645d640f6b5f-kube-api-access-xzpw7\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.025384 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.025417 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-config-data\") pod \"nova-scheduler-0\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " pod="openstack/nova-scheduler-0"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.025463 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.026307 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-config\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.026517 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.026566 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.035510 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " pod="openstack/nova-scheduler-0"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.042004 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.060725 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-config-data\") pod \"nova-scheduler-0\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " pod="openstack/nova-scheduler-0"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.088303 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt6kp\" (UniqueName: \"kubernetes.io/projected/9261fde2-342e-4a37-b8f9-f6715d09b003-kube-api-access-gt6kp\") pod \"nova-scheduler-0\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " pod="openstack/nova-scheduler-0"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.128298 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.128338 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.128374 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rlfs\" (UniqueName: \"kubernetes.io/projected/1cceca9f-0dae-4298-b495-2c2e09e6e63d-kube-api-access-9rlfs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.128438 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.128501 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-svc\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.128527 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzpw7\" (UniqueName: \"kubernetes.io/projected/bc9f56df-ded6-4d8a-8075-645d640f6b5f-kube-api-access-xzpw7\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.128575 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.128612 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.128634 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-config\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.129685 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-config\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.131439 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-svc\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.131973 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.132958 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-swift-storage-0\") pod 
\"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.144419 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.144654 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.163359 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.172951 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rlfs\" (UniqueName: \"kubernetes.io/projected/1cceca9f-0dae-4298-b495-2c2e09e6e63d-kube-api-access-9rlfs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.189196 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzpw7\" (UniqueName: \"kubernetes.io/projected/bc9f56df-ded6-4d8a-8075-645d640f6b5f-kube-api-access-xzpw7\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " 
pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.343066 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.414686 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.437388 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.618829 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hg9sm"] Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.650021 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.928125 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.132329 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.152418 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 08:14:32 crc kubenswrapper[4761]: W0307 08:14:32.217802 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cceca9f_0dae_4298_b495_2c2e09e6e63d.slice/crio-92a1a74929d44f1124508269aca2a63594ce2385d95da027c7837c515cc36b31 WatchSource:0}: Error finding container 92a1a74929d44f1124508269aca2a63594ce2385d95da027c7837c515cc36b31: Status 404 returned error can't find the container with id 92a1a74929d44f1124508269aca2a63594ce2385d95da027c7837c515cc36b31 Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.319322 4761 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-9b86998b5-vdbwn"] Mar 07 08:14:32 crc kubenswrapper[4761]: W0307 08:14:32.324151 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc9f56df_ded6_4d8a_8075_645d640f6b5f.slice/crio-1266d5c0ea10cbc1c7d4f4cb004228b7709c180fef5b782ffe5b05b34b696ff6 WatchSource:0}: Error finding container 1266d5c0ea10cbc1c7d4f4cb004228b7709c180fef5b782ffe5b05b34b696ff6: Status 404 returned error can't find the container with id 1266d5c0ea10cbc1c7d4f4cb004228b7709c180fef5b782ffe5b05b34b696ff6 Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.597150 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9261fde2-342e-4a37-b8f9-f6715d09b003","Type":"ContainerStarted","Data":"c26fcbe9ab1e6a3c13a0c0ab87a0dcb9733d543c9e555bcd15e6fdc735b44d88"} Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.601486 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1cceca9f-0dae-4298-b495-2c2e09e6e63d","Type":"ContainerStarted","Data":"92a1a74929d44f1124508269aca2a63594ce2385d95da027c7837c515cc36b31"} Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.611654 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hg9sm" event={"ID":"2dac6b04-d81b-43a0-8b71-ebaa8842366d","Type":"ContainerStarted","Data":"d5cb7aba8024010ea4f617e523acc80542873eaac8bf9f18735f631a8f629246"} Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.611699 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hg9sm" event={"ID":"2dac6b04-d81b-43a0-8b71-ebaa8842366d","Type":"ContainerStarted","Data":"d9237cf1801de0d12b1730f2f31210aefaa0b8081e17380a565b5d30e41dac2e"} Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.616949 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" event={"ID":"bc9f56df-ded6-4d8a-8075-645d640f6b5f","Type":"ContainerStarted","Data":"1266d5c0ea10cbc1c7d4f4cb004228b7709c180fef5b782ffe5b05b34b696ff6"} Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.622704 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec","Type":"ContainerStarted","Data":"e518df2df4996932b54f07b70e7e0bcbd06f7f0a7c41e95856dd1ae95ae3e660"} Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.624310 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e64fc67d-9589-470f-bac1-53ab06ccf63a","Type":"ContainerStarted","Data":"d29b0f6f71bec0433060335fb0e11ffa4d4c536c6966ca333d10fff1bcacec70"} Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.638377 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-hg9sm" podStartSLOduration=2.638361871 podStartE2EDuration="2.638361871s" podCreationTimestamp="2026-03-07 08:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:32.63353211 +0000 UTC m=+1529.542698575" watchObservedRunningTime="2026-03-07 08:14:32.638361871 +0000 UTC m=+1529.547528346" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.792795 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hwxg9"] Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.794376 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.797824 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.798037 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.811580 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hwxg9"] Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.894916 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmw55\" (UniqueName: \"kubernetes.io/projected/4931aa42-2c29-4ec8-ba24-e90210ad1aca-kube-api-access-rmw55\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.894980 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-scripts\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.895258 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.895373 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-config-data\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.997664 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.997744 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-config-data\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.997878 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmw55\" (UniqueName: \"kubernetes.io/projected/4931aa42-2c29-4ec8-ba24-e90210ad1aca-kube-api-access-rmw55\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.997907 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-scripts\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:33 crc kubenswrapper[4761]: I0307 08:14:33.005766 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:33 crc kubenswrapper[4761]: I0307 08:14:33.006960 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-scripts\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:33 crc kubenswrapper[4761]: I0307 08:14:33.015367 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-config-data\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:33 crc kubenswrapper[4761]: I0307 08:14:33.025127 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmw55\" (UniqueName: \"kubernetes.io/projected/4931aa42-2c29-4ec8-ba24-e90210ad1aca-kube-api-access-rmw55\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:33 crc kubenswrapper[4761]: I0307 08:14:33.175318 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:33 crc kubenswrapper[4761]: I0307 08:14:33.638784 4761 generic.go:334] "Generic (PLEG): container finished" podID="bc9f56df-ded6-4d8a-8075-645d640f6b5f" containerID="3ada3d87e766661465ef73a62b9cc99eb8c306100d63ce1c417917c314038b0c" exitCode=0 Mar 07 08:14:33 crc kubenswrapper[4761]: I0307 08:14:33.638935 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" event={"ID":"bc9f56df-ded6-4d8a-8075-645d640f6b5f","Type":"ContainerDied","Data":"3ada3d87e766661465ef73a62b9cc99eb8c306100d63ce1c417917c314038b0c"} Mar 07 08:14:33 crc kubenswrapper[4761]: I0307 08:14:33.815512 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hwxg9"] Mar 07 08:14:34 crc kubenswrapper[4761]: I0307 08:14:34.414454 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:34 crc kubenswrapper[4761]: I0307 08:14:34.446233 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 08:14:34 crc kubenswrapper[4761]: I0307 08:14:34.494527 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:34 crc kubenswrapper[4761]: I0307 08:14:34.494893 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="ceilometer-central-agent" containerID="cri-o://de2ac32dbb5a41220c7a84566414f147406027696b853c7b399e69472f50858b" gracePeriod=30 Mar 07 08:14:34 crc kubenswrapper[4761]: I0307 08:14:34.494932 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="sg-core" containerID="cri-o://831dd82cf11939dfe36196f3a5b44495796ada99b21ae36d142071463a50f01c" gracePeriod=30 Mar 07 08:14:34 crc 
kubenswrapper[4761]: I0307 08:14:34.494988 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="proxy-httpd" containerID="cri-o://c7fc3252256a8b72c1ee5410ff1687c7f3b5b09e36dde701bee55e8536730d4c" gracePeriod=30 Mar 07 08:14:34 crc kubenswrapper[4761]: I0307 08:14:34.494969 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="ceilometer-notification-agent" containerID="cri-o://3857f525f2d3053fa6279a8b9c61e360881f39dae23ed89a286c5a0b01c84858" gracePeriod=30 Mar 07 08:14:34 crc kubenswrapper[4761]: I0307 08:14:34.656672 4761 generic.go:334] "Generic (PLEG): container finished" podID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerID="831dd82cf11939dfe36196f3a5b44495796ada99b21ae36d142071463a50f01c" exitCode=2 Mar 07 08:14:34 crc kubenswrapper[4761]: I0307 08:14:34.656767 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerDied","Data":"831dd82cf11939dfe36196f3a5b44495796ada99b21ae36d142071463a50f01c"} Mar 07 08:14:35 crc kubenswrapper[4761]: I0307 08:14:35.681960 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hwxg9" event={"ID":"4931aa42-2c29-4ec8-ba24-e90210ad1aca","Type":"ContainerStarted","Data":"cdcc6626262cfb72659a9389831ee3c44de2130d63ecb8bbea3fe4800a68c1f1"} Mar 07 08:14:35 crc kubenswrapper[4761]: I0307 08:14:35.685941 4761 generic.go:334] "Generic (PLEG): container finished" podID="19b5d822-117e-4890-9ef2-6e75fc9a5c98" containerID="d4664c58f260536a81211c969a35f89ac9977c97d2b99db0a4bb205c039801d8" exitCode=137 Mar 07 08:14:35 crc kubenswrapper[4761]: I0307 08:14:35.685999 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f94956c9f-xbq22" 
event={"ID":"19b5d822-117e-4890-9ef2-6e75fc9a5c98","Type":"ContainerDied","Data":"d4664c58f260536a81211c969a35f89ac9977c97d2b99db0a4bb205c039801d8"} Mar 07 08:14:35 crc kubenswrapper[4761]: I0307 08:14:35.691516 4761 generic.go:334] "Generic (PLEG): container finished" podID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerID="c7fc3252256a8b72c1ee5410ff1687c7f3b5b09e36dde701bee55e8536730d4c" exitCode=0 Mar 07 08:14:35 crc kubenswrapper[4761]: I0307 08:14:35.691549 4761 generic.go:334] "Generic (PLEG): container finished" podID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerID="de2ac32dbb5a41220c7a84566414f147406027696b853c7b399e69472f50858b" exitCode=0 Mar 07 08:14:35 crc kubenswrapper[4761]: I0307 08:14:35.691573 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerDied","Data":"c7fc3252256a8b72c1ee5410ff1687c7f3b5b09e36dde701bee55e8536730d4c"} Mar 07 08:14:35 crc kubenswrapper[4761]: I0307 08:14:35.691603 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerDied","Data":"de2ac32dbb5a41220c7a84566414f147406027696b853c7b399e69472f50858b"} Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.716782 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hwxg9" event={"ID":"4931aa42-2c29-4ec8-ba24-e90210ad1aca","Type":"ContainerStarted","Data":"00517bee769197b1cd470a476b898df7ad9f81d3ab127b1e7dddf7ed79e2908b"} Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.722215 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f94956c9f-xbq22" event={"ID":"19b5d822-117e-4890-9ef2-6e75fc9a5c98","Type":"ContainerDied","Data":"389c6e57a5ddd4c58896f736a625938e5c1131cab1dad08fa30ca3830ba2988c"} Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.722262 4761 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="389c6e57a5ddd4c58896f736a625938e5c1131cab1dad08fa30ca3830ba2988c" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.734467 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e64fc67d-9589-470f-bac1-53ab06ccf63a","Type":"ContainerStarted","Data":"58dee734df2c2e42b401af896e07c889c430ed2732c73baf08dc81398684e967"} Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.740969 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9261fde2-342e-4a37-b8f9-f6715d09b003","Type":"ContainerStarted","Data":"19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1"} Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.764819 4761 generic.go:334] "Generic (PLEG): container finished" podID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerID="3857f525f2d3053fa6279a8b9c61e360881f39dae23ed89a286c5a0b01c84858" exitCode=0 Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.764901 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerDied","Data":"3857f525f2d3053fa6279a8b9c61e360881f39dae23ed89a286c5a0b01c84858"} Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.770103 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" event={"ID":"bc9f56df-ded6-4d8a-8075-645d640f6b5f","Type":"ContainerStarted","Data":"58a897c9e680fbc80c648ba02291c7e229e45e6c318e29a514b872744bbb65c0"} Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.770547 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-hwxg9" podStartSLOduration=4.770527049 podStartE2EDuration="4.770527049s" podCreationTimestamp="2026-03-07 08:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-07 08:14:36.731512026 +0000 UTC m=+1533.640678521" watchObservedRunningTime="2026-03-07 08:14:36.770527049 +0000 UTC m=+1533.679693524" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.771773 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.775653 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec","Type":"ContainerStarted","Data":"185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339"} Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.776199 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.095026101 podStartE2EDuration="6.7761869s" podCreationTimestamp="2026-03-07 08:14:30 +0000 UTC" firstStartedPulling="2026-03-07 08:14:32.170986523 +0000 UTC m=+1529.080152998" lastFinishedPulling="2026-03-07 08:14:35.852147322 +0000 UTC m=+1532.761313797" observedRunningTime="2026-03-07 08:14:36.764681403 +0000 UTC m=+1533.673847878" watchObservedRunningTime="2026-03-07 08:14:36.7761869 +0000 UTC m=+1533.685353375" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.807702 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" podStartSLOduration=6.807684866 podStartE2EDuration="6.807684866s" podCreationTimestamp="2026-03-07 08:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:36.788859066 +0000 UTC m=+1533.698025561" watchObservedRunningTime="2026-03-07 08:14:36.807684866 +0000 UTC m=+1533.716851341" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.824202 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.880072 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930460 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-log-httpd\") pod \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930502 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-run-httpd\") pod \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930595 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data\") pod \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930611 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-sg-core-conf-yaml\") pod \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930634 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljfgw\" (UniqueName: \"kubernetes.io/projected/19b5d822-117e-4890-9ef2-6e75fc9a5c98-kube-api-access-ljfgw\") pod \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " 
Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930668 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data-custom\") pod \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930704 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9m7d\" (UniqueName: \"kubernetes.io/projected/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-kube-api-access-d9m7d\") pod \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930860 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-config-data\") pod \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930895 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-combined-ca-bundle\") pod \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930992 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-combined-ca-bundle\") pod \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.931081 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-scripts\") pod \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.932166 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" (UID: "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.932449 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" (UID: "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.954958 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "19b5d822-117e-4890-9ef2-6e75fc9a5c98" (UID: "19b5d822-117e-4890-9ef2-6e75fc9a5c98"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.957156 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-scripts" (OuterVolumeSpecName: "scripts") pod "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" (UID: "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.961146 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-kube-api-access-d9m7d" (OuterVolumeSpecName: "kube-api-access-d9m7d") pod "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" (UID: "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1"). InnerVolumeSpecName "kube-api-access-d9m7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.967071 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19b5d822-117e-4890-9ef2-6e75fc9a5c98-kube-api-access-ljfgw" (OuterVolumeSpecName: "kube-api-access-ljfgw") pod "19b5d822-117e-4890-9ef2-6e75fc9a5c98" (UID: "19b5d822-117e-4890-9ef2-6e75fc9a5c98"). InnerVolumeSpecName "kube-api-access-ljfgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.017720 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19b5d822-117e-4890-9ef2-6e75fc9a5c98" (UID: "19b5d822-117e-4890-9ef2-6e75fc9a5c98"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.046504 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.046541 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.046550 4761 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.046561 4761 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.046570 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljfgw\" (UniqueName: \"kubernetes.io/projected/19b5d822-117e-4890-9ef2-6e75fc9a5c98-kube-api-access-ljfgw\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.046581 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.046592 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9m7d\" (UniqueName: \"kubernetes.io/projected/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-kube-api-access-d9m7d\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.049551 4761 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" (UID: "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.097468 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data" (OuterVolumeSpecName: "config-data") pod "19b5d822-117e-4890-9ef2-6e75fc9a5c98" (UID: "19b5d822-117e-4890-9ef2-6e75fc9a5c98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.108799 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" (UID: "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.149697 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.149769 4761 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.149783 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.189056 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-config-data" (OuterVolumeSpecName: "config-data") pod "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" (UID: "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.252525 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.794212 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec","Type":"ContainerStarted","Data":"bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c"} Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.794363 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerName="nova-metadata-log" containerID="cri-o://185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339" gracePeriod=30 Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.794668 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerName="nova-metadata-metadata" containerID="cri-o://bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c" gracePeriod=30 Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.796550 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e64fc67d-9589-470f-bac1-53ab06ccf63a","Type":"ContainerStarted","Data":"ac8a691cc2fd6e47a8c926f63560099be8b63de821acd956efaa772c91cc8f15"} Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.803569 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1cceca9f-0dae-4298-b495-2c2e09e6e63d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65" gracePeriod=30 Mar 
07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.803663 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1cceca9f-0dae-4298-b495-2c2e09e6e63d","Type":"ContainerStarted","Data":"f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65"} Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.832645 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.834068 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerDied","Data":"68d2a1e8dcf680e1682fdb45273afb6748998164a4d34d2b2c0184052f6a908e"} Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.834120 4761 scope.go:117] "RemoveContainer" containerID="c7fc3252256a8b72c1ee5410ff1687c7f3b5b09e36dde701bee55e8536730d4c" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.834300 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.845802 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.9333349010000003 podStartE2EDuration="7.845764438s" podCreationTimestamp="2026-03-07 08:14:30 +0000 UTC" firstStartedPulling="2026-03-07 08:14:31.944274198 +0000 UTC m=+1528.853440673" lastFinishedPulling="2026-03-07 08:14:35.856703725 +0000 UTC m=+1532.765870210" observedRunningTime="2026-03-07 08:14:37.814198171 +0000 UTC m=+1534.723364646" watchObservedRunningTime="2026-03-07 08:14:37.845764438 +0000 UTC m=+1534.754930923" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.861604 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.647039059 podStartE2EDuration="7.860529056s" podCreationTimestamp="2026-03-07 08:14:30 +0000 UTC" firstStartedPulling="2026-03-07 08:14:31.637926467 +0000 UTC m=+1528.547092942" lastFinishedPulling="2026-03-07 08:14:35.851416464 +0000 UTC m=+1532.760582939" observedRunningTime="2026-03-07 08:14:37.844426035 +0000 UTC m=+1534.753592520" watchObservedRunningTime="2026-03-07 08:14:37.860529056 +0000 UTC m=+1534.769695541" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.877539 4761 scope.go:117] "RemoveContainer" containerID="831dd82cf11939dfe36196f3a5b44495796ada99b21ae36d142071463a50f01c" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.913821 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.294251483 podStartE2EDuration="7.913673022s" podCreationTimestamp="2026-03-07 08:14:30 +0000 UTC" firstStartedPulling="2026-03-07 08:14:32.232080587 +0000 UTC m=+1529.141247062" lastFinishedPulling="2026-03-07 08:14:35.851502126 +0000 UTC m=+1532.760668601" observedRunningTime="2026-03-07 08:14:37.863051089 +0000 UTC m=+1534.772217574" 
watchObservedRunningTime="2026-03-07 08:14:37.913673022 +0000 UTC m=+1534.822839497" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.936121 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.948330 4761 scope.go:117] "RemoveContainer" containerID="3857f525f2d3053fa6279a8b9c61e360881f39dae23ed89a286c5a0b01c84858" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.955762 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.972024 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6f94956c9f-xbq22"] Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.985797 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:37 crc kubenswrapper[4761]: E0307 08:14:37.986310 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19b5d822-117e-4890-9ef2-6e75fc9a5c98" containerName="heat-api" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986333 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19b5d822-117e-4890-9ef2-6e75fc9a5c98" containerName="heat-api" Mar 07 08:14:37 crc kubenswrapper[4761]: E0307 08:14:37.986360 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="sg-core" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986366 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="sg-core" Mar 07 08:14:37 crc kubenswrapper[4761]: E0307 08:14:37.986397 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="ceilometer-notification-agent" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986404 4761 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="ceilometer-notification-agent" Mar 07 08:14:37 crc kubenswrapper[4761]: E0307 08:14:37.986414 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="proxy-httpd" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986420 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="proxy-httpd" Mar 07 08:14:37 crc kubenswrapper[4761]: E0307 08:14:37.986427 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="ceilometer-central-agent" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986435 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="ceilometer-central-agent" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986633 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="19b5d822-117e-4890-9ef2-6e75fc9a5c98" containerName="heat-api" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986666 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="ceilometer-notification-agent" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986676 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="sg-core" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986686 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="proxy-httpd" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986702 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="ceilometer-central-agent" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.988788 4761 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.991617 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.991871 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.012423 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6f94956c9f-xbq22"] Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.033790 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.075391 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-log-httpd\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.075741 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-config-data\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.075766 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.075789 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.075838 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-scripts\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.075881 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkszg\" (UniqueName: \"kubernetes.io/projected/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-kube-api-access-zkszg\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.075937 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-run-httpd\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.162072 4761 scope.go:117] "RemoveContainer" containerID="de2ac32dbb5a41220c7a84566414f147406027696b853c7b399e69472f50858b" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.179991 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-run-httpd\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.180183 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-log-httpd\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.180286 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-config-data\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.180311 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.180342 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.180413 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-scripts\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.180477 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkszg\" (UniqueName: \"kubernetes.io/projected/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-kube-api-access-zkszg\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 
08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.181741 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-run-httpd\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.182017 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-log-httpd\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.188045 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-scripts\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.191293 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.201513 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkszg\" (UniqueName: \"kubernetes.io/projected/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-kube-api-access-zkszg\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.201923 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-config-data\") pod \"ceilometer-0\" (UID: 
\"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.202034 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.469123 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.642087 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.693305 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-logs\") pod \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.693771 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-combined-ca-bundle\") pod \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.694597 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-config-data\") pod \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.694735 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzn54\" (UniqueName: 
\"kubernetes.io/projected/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-kube-api-access-qzn54\") pod \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.701431 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-kube-api-access-qzn54" (OuterVolumeSpecName: "kube-api-access-qzn54") pod "bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" (UID: "bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec"). InnerVolumeSpecName "kube-api-access-qzn54". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.701972 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-logs" (OuterVolumeSpecName: "logs") pod "bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" (UID: "bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.744644 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" (UID: "bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.788166 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-config-data" (OuterVolumeSpecName: "config-data") pod "bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" (UID: "bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.798473 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzn54\" (UniqueName: \"kubernetes.io/projected/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-kube-api-access-qzn54\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.798510 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.798523 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.798534 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.866715 4761 generic.go:334] "Generic (PLEG): container finished" podID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerID="bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c" exitCode=0 Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.866765 4761 generic.go:334] "Generic (PLEG): container finished" podID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerID="185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339" exitCode=143 Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.868079 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.868987 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec","Type":"ContainerDied","Data":"bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c"} Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.869055 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec","Type":"ContainerDied","Data":"185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339"} Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.869073 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec","Type":"ContainerDied","Data":"e518df2df4996932b54f07b70e7e0bcbd06f7f0a7c41e95856dd1ae95ae3e660"} Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.869092 4761 scope.go:117] "RemoveContainer" containerID="bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.908076 4761 scope.go:117] "RemoveContainer" containerID="185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.953837 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.965418 4761 scope.go:117] "RemoveContainer" containerID="bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c" Mar 07 08:14:38 crc kubenswrapper[4761]: E0307 08:14:38.965951 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c\": container with ID starting with bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c 
not found: ID does not exist" containerID="bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.966051 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c"} err="failed to get container status \"bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c\": rpc error: code = NotFound desc = could not find container \"bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c\": container with ID starting with bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c not found: ID does not exist" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.966175 4761 scope.go:117] "RemoveContainer" containerID="185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339" Mar 07 08:14:38 crc kubenswrapper[4761]: E0307 08:14:38.966568 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339\": container with ID starting with 185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339 not found: ID does not exist" containerID="185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.966605 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339"} err="failed to get container status \"185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339\": rpc error: code = NotFound desc = could not find container \"185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339\": container with ID starting with 185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339 not found: ID does not exist" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 
08:14:38.966630 4761 scope.go:117] "RemoveContainer" containerID="bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.967462 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c"} err="failed to get container status \"bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c\": rpc error: code = NotFound desc = could not find container \"bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c\": container with ID starting with bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c not found: ID does not exist" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.967487 4761 scope.go:117] "RemoveContainer" containerID="185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.967830 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339"} err="failed to get container status \"185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339\": rpc error: code = NotFound desc = could not find container \"185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339\": container with ID starting with 185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339 not found: ID does not exist" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.976706 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.992976 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:38 crc kubenswrapper[4761]: E0307 08:14:38.993587 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" 
containerName="nova-metadata-log" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.993611 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerName="nova-metadata-log" Mar 07 08:14:38 crc kubenswrapper[4761]: E0307 08:14:38.993630 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerName="nova-metadata-metadata" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.993640 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerName="nova-metadata-metadata" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.994001 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerName="nova-metadata-metadata" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.994044 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerName="nova-metadata-log" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.002504 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.005740 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.005867 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.034050 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:39 crc kubenswrapper[4761]: W0307 08:14:39.039797 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4 WatchSource:0}: Error finding container 44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4: Status 404 returned error can't find the container with id 44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4 Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.061009 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.116526 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpx5r\" (UniqueName: \"kubernetes.io/projected/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-kube-api-access-fpx5r\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.116603 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " 
pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.116681 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.116742 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-config-data\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.116782 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-logs\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.219001 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpx5r\" (UniqueName: \"kubernetes.io/projected/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-kube-api-access-fpx5r\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.219363 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.219453 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.219508 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-config-data\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.219556 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-logs\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.219970 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-logs\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.225006 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.225559 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.225698 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-config-data\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.237893 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpx5r\" (UniqueName: \"kubernetes.io/projected/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-kube-api-access-fpx5r\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.337606 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.719632 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" path="/var/lib/kubelet/pods/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1/volumes" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.721218 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19b5d822-117e-4890-9ef2-6e75fc9a5c98" path="/var/lib/kubelet/pods/19b5d822-117e-4890-9ef2-6e75fc9a5c98/volumes" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.722181 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" path="/var/lib/kubelet/pods/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec/volumes" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.827303 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:39 crc kubenswrapper[4761]: W0307 08:14:39.831576 4761 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f74cec5_c8cb_43f4_97a2_6eb7f4f517b6.slice/crio-23c91c1a68a575c6189c0620e46750852b0c3776c0d68333b0b91ac946bc68e6 WatchSource:0}: Error finding container 23c91c1a68a575c6189c0620e46750852b0c3776c0d68333b0b91ac946bc68e6: Status 404 returned error can't find the container with id 23c91c1a68a575c6189c0620e46750852b0c3776c0d68333b0b91ac946bc68e6 Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.878243 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerStarted","Data":"69b4bf84dd39df6b7d9b398c110264cd806fbdc4859293bf644ef1767167f6e9"} Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.878473 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerStarted","Data":"44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4"} Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.879238 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6","Type":"ContainerStarted","Data":"23c91c1a68a575c6189c0620e46750852b0c3776c0d68333b0b91ac946bc68e6"} Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.473514 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-ddvxb"] Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.475358 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.552316 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-77e3-account-create-update-8b9pf"] Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.553923 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.561069 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.567287 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-operator-scripts\") pod \"aodh-db-create-ddvxb\" (UID: \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\") " pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.568787 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsbz4\" (UniqueName: \"kubernetes.io/projected/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-kube-api-access-xsbz4\") pod \"aodh-db-create-ddvxb\" (UID: \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\") " pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.570473 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-ddvxb"] Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.585700 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-77e3-account-create-update-8b9pf"] Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.671842 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rsq9\" (UniqueName: \"kubernetes.io/projected/130238c4-fadf-46e2-a802-0608b83ec9a2-kube-api-access-2rsq9\") pod \"aodh-77e3-account-create-update-8b9pf\" (UID: \"130238c4-fadf-46e2-a802-0608b83ec9a2\") " pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.672046 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsbz4\" (UniqueName: 
\"kubernetes.io/projected/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-kube-api-access-xsbz4\") pod \"aodh-db-create-ddvxb\" (UID: \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\") " pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.672157 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/130238c4-fadf-46e2-a802-0608b83ec9a2-operator-scripts\") pod \"aodh-77e3-account-create-update-8b9pf\" (UID: \"130238c4-fadf-46e2-a802-0608b83ec9a2\") " pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.672222 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-operator-scripts\") pod \"aodh-db-create-ddvxb\" (UID: \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\") " pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.673241 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-operator-scripts\") pod \"aodh-db-create-ddvxb\" (UID: \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\") " pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.691000 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsbz4\" (UniqueName: \"kubernetes.io/projected/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-kube-api-access-xsbz4\") pod \"aodh-db-create-ddvxb\" (UID: \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\") " pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.752960 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.781003 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rsq9\" (UniqueName: \"kubernetes.io/projected/130238c4-fadf-46e2-a802-0608b83ec9a2-kube-api-access-2rsq9\") pod \"aodh-77e3-account-create-update-8b9pf\" (UID: \"130238c4-fadf-46e2-a802-0608b83ec9a2\") " pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.781599 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/130238c4-fadf-46e2-a802-0608b83ec9a2-operator-scripts\") pod \"aodh-77e3-account-create-update-8b9pf\" (UID: \"130238c4-fadf-46e2-a802-0608b83ec9a2\") " pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.782849 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/130238c4-fadf-46e2-a802-0608b83ec9a2-operator-scripts\") pod \"aodh-77e3-account-create-update-8b9pf\" (UID: \"130238c4-fadf-46e2-a802-0608b83ec9a2\") " pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.801198 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rsq9\" (UniqueName: \"kubernetes.io/projected/130238c4-fadf-46e2-a802-0608b83ec9a2-kube-api-access-2rsq9\") pod \"aodh-77e3-account-create-update-8b9pf\" (UID: \"130238c4-fadf-46e2-a802-0608b83ec9a2\") " pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.919273 4761 generic.go:334] "Generic (PLEG): container finished" podID="2dac6b04-d81b-43a0-8b71-ebaa8842366d" containerID="d5cb7aba8024010ea4f617e523acc80542873eaac8bf9f18735f631a8f629246" exitCode=0 Mar 07 08:14:40 crc 
kubenswrapper[4761]: I0307 08:14:40.919462 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hg9sm" event={"ID":"2dac6b04-d81b-43a0-8b71-ebaa8842366d","Type":"ContainerDied","Data":"d5cb7aba8024010ea4f617e523acc80542873eaac8bf9f18735f631a8f629246"} Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.925381 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerStarted","Data":"9fbf4f9d40a0ec24b8dea09bb5d46ee8c49f0582f2fa196ad53b3fa0be0e0a4f"} Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.928085 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6","Type":"ContainerStarted","Data":"379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660"} Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.928707 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6","Type":"ContainerStarted","Data":"b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda"} Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.945314 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.945356 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.992151 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.992098537 podStartE2EDuration="2.992098537s" podCreationTimestamp="2026-03-07 08:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:40.963511604 +0000 UTC m=+1537.872678079" 
watchObservedRunningTime="2026-03-07 08:14:40.992098537 +0000 UTC m=+1537.901265012" Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.082320 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.344045 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.344082 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.388892 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.417229 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.419488 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-ddvxb"] Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.444854 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.583069 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-965pw"] Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.583369 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" podUID="3fa2e962-e967-40fc-b5e5-4ae20c68a139" containerName="dnsmasq-dns" containerID="cri-o://b8012e41217590ca3360af9b406c062750b0e98b8b0bc957f29f8f2fff4b4956" gracePeriod=10 Mar 07 08:14:41 crc kubenswrapper[4761]: W0307 08:14:41.739276 4761 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod130238c4_fadf_46e2_a802_0608b83ec9a2.slice/crio-49121ea378caaf9cc11422f1c3dd400f8b1339e25b96b77511d70d331664207b WatchSource:0}: Error finding container 49121ea378caaf9cc11422f1c3dd400f8b1339e25b96b77511d70d331664207b: Status 404 returned error can't find the container with id 49121ea378caaf9cc11422f1c3dd400f8b1339e25b96b77511d70d331664207b Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.747210 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-77e3-account-create-update-8b9pf"] Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.952623 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-ddvxb" event={"ID":"2f7b5d35-c686-46fe-9e07-8f95cba61e5b","Type":"ContainerStarted","Data":"0110377348c876298de2a975b96c3aa38816fbe347c2c173440876eba190ce3d"} Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.952996 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-ddvxb" event={"ID":"2f7b5d35-c686-46fe-9e07-8f95cba61e5b","Type":"ContainerStarted","Data":"0be242e1b36456bf1a485bd4da7dd293d8757e8a4240d92472eda354490ffbfd"} Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.976215 4761 generic.go:334] "Generic (PLEG): container finished" podID="3fa2e962-e967-40fc-b5e5-4ae20c68a139" containerID="b8012e41217590ca3360af9b406c062750b0e98b8b0bc957f29f8f2fff4b4956" exitCode=0 Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.976288 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" event={"ID":"3fa2e962-e967-40fc-b5e5-4ae20c68a139","Type":"ContainerDied","Data":"b8012e41217590ca3360af9b406c062750b0e98b8b0bc957f29f8f2fff4b4956"} Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.986394 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-ddvxb" podStartSLOduration=1.9863680270000001 
podStartE2EDuration="1.986368027s" podCreationTimestamp="2026-03-07 08:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:41.973872375 +0000 UTC m=+1538.883038850" watchObservedRunningTime="2026-03-07 08:14:41.986368027 +0000 UTC m=+1538.895534522" Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.988565 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-77e3-account-create-update-8b9pf" event={"ID":"130238c4-fadf-46e2-a802-0608b83ec9a2","Type":"ContainerStarted","Data":"49121ea378caaf9cc11422f1c3dd400f8b1339e25b96b77511d70d331664207b"} Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.003347 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerStarted","Data":"178b4b5a5b5a97c98c5a01eeef66b7b962e2bb8a1f3fd5c70b486b42f553a81f"} Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.028141 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.246:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.028549 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.246:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.100978 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.259419 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.352037 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9slj\" (UniqueName: \"kubernetes.io/projected/3fa2e962-e967-40fc-b5e5-4ae20c68a139-kube-api-access-m9slj\") pod \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.352124 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-swift-storage-0\") pod \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.352147 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-sb\") pod \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.352207 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-svc\") pod \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.352256 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-config\") pod \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.352392 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-nb\") pod \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") "
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.395958 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fa2e962-e967-40fc-b5e5-4ae20c68a139-kube-api-access-m9slj" (OuterVolumeSpecName: "kube-api-access-m9slj") pod "3fa2e962-e967-40fc-b5e5-4ae20c68a139" (UID: "3fa2e962-e967-40fc-b5e5-4ae20c68a139"). InnerVolumeSpecName "kube-api-access-m9slj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.476884 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9slj\" (UniqueName: \"kubernetes.io/projected/3fa2e962-e967-40fc-b5e5-4ae20c68a139-kube-api-access-m9slj\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.670982 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-config" (OuterVolumeSpecName: "config") pod "3fa2e962-e967-40fc-b5e5-4ae20c68a139" (UID: "3fa2e962-e967-40fc-b5e5-4ae20c68a139"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.671367 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3fa2e962-e967-40fc-b5e5-4ae20c68a139" (UID: "3fa2e962-e967-40fc-b5e5-4ae20c68a139"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.672913 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3fa2e962-e967-40fc-b5e5-4ae20c68a139" (UID: "3fa2e962-e967-40fc-b5e5-4ae20c68a139"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.682972 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3fa2e962-e967-40fc-b5e5-4ae20c68a139" (UID: "3fa2e962-e967-40fc-b5e5-4ae20c68a139"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.685498 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.685535 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.685547 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-config\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.685560 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.692509 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3fa2e962-e967-40fc-b5e5-4ae20c68a139" (UID: "3fa2e962-e967-40fc-b5e5-4ae20c68a139"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.763600 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hg9sm"
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.787929 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-config-data\") pod \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") "
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.788051 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-combined-ca-bundle\") pod \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") "
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.788249 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-scripts\") pod \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") "
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.788300 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlwcd\" (UniqueName: \"kubernetes.io/projected/2dac6b04-d81b-43a0-8b71-ebaa8842366d-kube-api-access-jlwcd\") pod \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") "
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.788911 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.795816 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-scripts" (OuterVolumeSpecName: "scripts") pod "2dac6b04-d81b-43a0-8b71-ebaa8842366d" (UID: "2dac6b04-d81b-43a0-8b71-ebaa8842366d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.804026 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dac6b04-d81b-43a0-8b71-ebaa8842366d-kube-api-access-jlwcd" (OuterVolumeSpecName: "kube-api-access-jlwcd") pod "2dac6b04-d81b-43a0-8b71-ebaa8842366d" (UID: "2dac6b04-d81b-43a0-8b71-ebaa8842366d"). InnerVolumeSpecName "kube-api-access-jlwcd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.820695 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dac6b04-d81b-43a0-8b71-ebaa8842366d" (UID: "2dac6b04-d81b-43a0-8b71-ebaa8842366d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.855261 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-config-data" (OuterVolumeSpecName: "config-data") pod "2dac6b04-d81b-43a0-8b71-ebaa8842366d" (UID: "2dac6b04-d81b-43a0-8b71-ebaa8842366d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.890823 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-scripts\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.890856 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlwcd\" (UniqueName: \"kubernetes.io/projected/2dac6b04-d81b-43a0-8b71-ebaa8842366d-kube-api-access-jlwcd\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.890869 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-config-data\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.890881 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.016278 4761 generic.go:334] "Generic (PLEG): container finished" podID="2f7b5d35-c686-46fe-9e07-8f95cba61e5b" containerID="0110377348c876298de2a975b96c3aa38816fbe347c2c173440876eba190ce3d" exitCode=0
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.016350 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-ddvxb" event={"ID":"2f7b5d35-c686-46fe-9e07-8f95cba61e5b","Type":"ContainerDied","Data":"0110377348c876298de2a975b96c3aa38816fbe347c2c173440876eba190ce3d"}
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.018755 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" event={"ID":"3fa2e962-e967-40fc-b5e5-4ae20c68a139","Type":"ContainerDied","Data":"c7f9427f615055e9a18c9397a7d87a5785d5dcd67c8486de7249009393b28b5e"}
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.018801 4761 scope.go:117] "RemoveContainer" containerID="b8012e41217590ca3360af9b406c062750b0e98b8b0bc957f29f8f2fff4b4956"
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.018946 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-965pw"
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.026500 4761 generic.go:334] "Generic (PLEG): container finished" podID="130238c4-fadf-46e2-a802-0608b83ec9a2" containerID="9c1e1a06fc0e08cdc250961ed8e0100243d00ee6fe2c789c7e55aa8258d1d22e" exitCode=0
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.026550 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-77e3-account-create-update-8b9pf" event={"ID":"130238c4-fadf-46e2-a802-0608b83ec9a2","Type":"ContainerDied","Data":"9c1e1a06fc0e08cdc250961ed8e0100243d00ee6fe2c789c7e55aa8258d1d22e"}
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.032810 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hg9sm"
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.032860 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hg9sm" event={"ID":"2dac6b04-d81b-43a0-8b71-ebaa8842366d","Type":"ContainerDied","Data":"d9237cf1801de0d12b1730f2f31210aefaa0b8081e17380a565b5d30e41dac2e"}
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.032907 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9237cf1801de0d12b1730f2f31210aefaa0b8081e17380a565b5d30e41dac2e"
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.059055 4761 scope.go:117] "RemoveContainer" containerID="1ecae72867ce15c7a0313b5c34b8ca58e83a3ffff4e98873805434f8cbe5b2e6"
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.157606 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-965pw"]
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.170880 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-965pw"]
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.181868 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.182256 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-log" containerID="cri-o://58dee734df2c2e42b401af896e07c889c430ed2732c73baf08dc81398684e967" gracePeriod=30
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.182818 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-api" containerID="cri-o://ac8a691cc2fd6e47a8c926f63560099be8b63de821acd956efaa772c91cc8f15" gracePeriod=30
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.201235 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.201578 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerName="nova-metadata-log" containerID="cri-o://b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda" gracePeriod=30
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.202159 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerName="nova-metadata-metadata" containerID="cri-o://379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660" gracePeriod=30
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.534941 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.738997 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fa2e962-e967-40fc-b5e5-4ae20c68a139" path="/var/lib/kubelet/pods/3fa2e962-e967-40fc-b5e5-4ae20c68a139/volumes"
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.767151 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.768030 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.768063 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.823020 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-combined-ca-bundle\") pod \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") "
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.823107 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-nova-metadata-tls-certs\") pod \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") "
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.823152 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-config-data\") pod \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") "
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.823240 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-logs\") pod \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") "
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.823324 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpx5r\" (UniqueName: \"kubernetes.io/projected/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-kube-api-access-fpx5r\") pod \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") "
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.839161 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-logs" (OuterVolumeSpecName: "logs") pod "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" (UID: "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.843617 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-kube-api-access-fpx5r" (OuterVolumeSpecName: "kube-api-access-fpx5r") pod "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" (UID: "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6"). InnerVolumeSpecName "kube-api-access-fpx5r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.865231 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-config-data" (OuterVolumeSpecName: "config-data") pod "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" (UID: "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.877072 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" (UID: "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.918636 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" (UID: "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.925636 4761 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.925679 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-config-data\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.925691 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-logs\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.925700 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpx5r\" (UniqueName: \"kubernetes.io/projected/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-kube-api-access-fpx5r\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.925708 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 08:14:43 crc kubenswrapper[4761]: E0307 08:14:43.946972 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode64fc67d_9589_470f_bac1_53ab06ccf63a.slice/crio-conmon-58dee734df2c2e42b401af896e07c889c430ed2732c73baf08dc81398684e967.scope\": RecentStats: unable to find data in memory cache]"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.043818 4761 generic.go:334] "Generic (PLEG): container finished" podID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerID="379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660" exitCode=0
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.043857 4761 generic.go:334] "Generic (PLEG): container finished" podID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerID="b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda" exitCode=143
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.043877 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.043918 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6","Type":"ContainerDied","Data":"379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660"}
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.043981 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6","Type":"ContainerDied","Data":"b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda"}
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.043995 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6","Type":"ContainerDied","Data":"23c91c1a68a575c6189c0620e46750852b0c3776c0d68333b0b91ac946bc68e6"}
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.044012 4761 scope.go:117] "RemoveContainer" containerID="379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.054448 4761 generic.go:334] "Generic (PLEG): container finished" podID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerID="58dee734df2c2e42b401af896e07c889c430ed2732c73baf08dc81398684e967" exitCode=143
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.054515 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e64fc67d-9589-470f-bac1-53ab06ccf63a","Type":"ContainerDied","Data":"58dee734df2c2e42b401af896e07c889c430ed2732c73baf08dc81398684e967"}
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.058617 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerStarted","Data":"07338e0375850617de7a90d252dd69e08c516f72a7329bdb33d6d7250f0f8095"}
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.058896 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.058885 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9261fde2-342e-4a37-b8f9-f6715d09b003" containerName="nova-scheduler-scheduler" containerID="cri-o://19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1" gracePeriod=30
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.092779 4761 scope.go:117] "RemoveContainer" containerID="b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.129233 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.825062117 podStartE2EDuration="7.129187265s" podCreationTimestamp="2026-03-07 08:14:37 +0000 UTC" firstStartedPulling="2026-03-07 08:14:39.042872708 +0000 UTC m=+1535.952039183" lastFinishedPulling="2026-03-07 08:14:43.346997846 +0000 UTC m=+1540.256164331" observedRunningTime="2026-03-07 08:14:44.096693844 +0000 UTC m=+1541.005860329" watchObservedRunningTime="2026-03-07 08:14:44.129187265 +0000 UTC m=+1541.038353740"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.141622 4761 scope.go:117] "RemoveContainer" containerID="379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660"
Mar 07 08:14:44 crc kubenswrapper[4761]: E0307 08:14:44.143602 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660\": container with ID starting with 379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660 not found: ID does not exist" containerID="379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.143927 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660"} err="failed to get container status \"379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660\": rpc error: code = NotFound desc = could not find container \"379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660\": container with ID starting with 379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660 not found: ID does not exist"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.144053 4761 scope.go:117] "RemoveContainer" containerID="b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda"
Mar 07 08:14:44 crc kubenswrapper[4761]: E0307 08:14:44.148563 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda\": container with ID starting with b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda not found: ID does not exist" containerID="b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.148842 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda"} err="failed to get container status \"b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda\": rpc error: code = NotFound desc = could not find container \"b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda\": container with ID starting with b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda not found: ID does not exist"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.148931 4761 scope.go:117] "RemoveContainer" containerID="379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.152240 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660"} err="failed to get container status \"379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660\": rpc error: code = NotFound desc = could not find container \"379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660\": container with ID starting with 379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660 not found: ID does not exist"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.152396 4761 scope.go:117] "RemoveContainer" containerID="b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.154457 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda"} err="failed to get container status \"b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda\": rpc error: code = NotFound desc = could not find container \"b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda\": container with ID starting with b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda not found: ID does not exist"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.160950 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.174987 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.185686 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 07 08:14:44 crc kubenswrapper[4761]: E0307 08:14:44.186249 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fa2e962-e967-40fc-b5e5-4ae20c68a139" containerName="dnsmasq-dns"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.186268 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa2e962-e967-40fc-b5e5-4ae20c68a139" containerName="dnsmasq-dns"
Mar 07 08:14:44 crc kubenswrapper[4761]: E0307 08:14:44.186280 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerName="nova-metadata-log"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.186286 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerName="nova-metadata-log"
Mar 07 08:14:44 crc kubenswrapper[4761]: E0307 08:14:44.186306 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerName="nova-metadata-metadata"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.186314 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerName="nova-metadata-metadata"
Mar 07 08:14:44 crc kubenswrapper[4761]: E0307 08:14:44.186337 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dac6b04-d81b-43a0-8b71-ebaa8842366d" containerName="nova-manage"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.186343 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dac6b04-d81b-43a0-8b71-ebaa8842366d" containerName="nova-manage"
Mar 07 08:14:44 crc kubenswrapper[4761]: E0307 08:14:44.186364 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fa2e962-e967-40fc-b5e5-4ae20c68a139" containerName="init"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.186372 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa2e962-e967-40fc-b5e5-4ae20c68a139" containerName="init"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.186569 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dac6b04-d81b-43a0-8b71-ebaa8842366d" containerName="nova-manage"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.186588 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerName="nova-metadata-metadata"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.186600 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerName="nova-metadata-log"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.186619 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fa2e962-e967-40fc-b5e5-4ae20c68a139" containerName="dnsmasq-dns"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.187814 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.194415 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.223707 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.224337 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.234981 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5bw5\" (UniqueName: \"kubernetes.io/projected/d8661e6f-7759-475f-8964-bae1b8cfebbe-kube-api-access-k5bw5\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.235020 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.235145 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-config-data\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.235178 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.235277 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8661e6f-7759-475f-8964-bae1b8cfebbe-logs\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.384914 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-config-data\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.385013 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.385327 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8661e6f-7759-475f-8964-bae1b8cfebbe-logs\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.385422 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5bw5\" (UniqueName: \"kubernetes.io/projected/d8661e6f-7759-475f-8964-bae1b8cfebbe-kube-api-access-k5bw5\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.385477 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.387331 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8661e6f-7759-475f-8964-bae1b8cfebbe-logs\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.395466 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.396016 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-config-data\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.408679 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.409911 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5bw5\" (UniqueName: \"kubernetes.io/projected/d8661e6f-7759-475f-8964-bae1b8cfebbe-kube-api-access-k5bw5\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.535372 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.715566 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-ddvxb"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.755963 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-77e3-account-create-update-8b9pf"
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.802424 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-operator-scripts\") pod \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\" (UID: \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\") "
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.802602 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsbz4\" (UniqueName: \"kubernetes.io/projected/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-kube-api-access-xsbz4\") pod \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\" (UID: \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\") "
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.803312 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f7b5d35-c686-46fe-9e07-8f95cba61e5b" (UID: "2f7b5d35-c686-46fe-9e07-8f95cba61e5b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.808696 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-kube-api-access-xsbz4" (OuterVolumeSpecName: "kube-api-access-xsbz4") pod "2f7b5d35-c686-46fe-9e07-8f95cba61e5b" (UID: "2f7b5d35-c686-46fe-9e07-8f95cba61e5b"). InnerVolumeSpecName "kube-api-access-xsbz4".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.905206 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rsq9\" (UniqueName: \"kubernetes.io/projected/130238c4-fadf-46e2-a802-0608b83ec9a2-kube-api-access-2rsq9\") pod \"130238c4-fadf-46e2-a802-0608b83ec9a2\" (UID: \"130238c4-fadf-46e2-a802-0608b83ec9a2\") " Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.905652 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/130238c4-fadf-46e2-a802-0608b83ec9a2-operator-scripts\") pod \"130238c4-fadf-46e2-a802-0608b83ec9a2\" (UID: \"130238c4-fadf-46e2-a802-0608b83ec9a2\") " Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.906527 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsbz4\" (UniqueName: \"kubernetes.io/projected/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-kube-api-access-xsbz4\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.906552 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.907165 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/130238c4-fadf-46e2-a802-0608b83ec9a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "130238c4-fadf-46e2-a802-0608b83ec9a2" (UID: "130238c4-fadf-46e2-a802-0608b83ec9a2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.909652 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/130238c4-fadf-46e2-a802-0608b83ec9a2-kube-api-access-2rsq9" (OuterVolumeSpecName: "kube-api-access-2rsq9") pod "130238c4-fadf-46e2-a802-0608b83ec9a2" (UID: "130238c4-fadf-46e2-a802-0608b83ec9a2"). InnerVolumeSpecName "kube-api-access-2rsq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.009273 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rsq9\" (UniqueName: \"kubernetes.io/projected/130238c4-fadf-46e2-a802-0608b83ec9a2-kube-api-access-2rsq9\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.009322 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/130238c4-fadf-46e2-a802-0608b83ec9a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.072783 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-ddvxb" event={"ID":"2f7b5d35-c686-46fe-9e07-8f95cba61e5b","Type":"ContainerDied","Data":"0be242e1b36456bf1a485bd4da7dd293d8757e8a4240d92472eda354490ffbfd"} Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.072822 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0be242e1b36456bf1a485bd4da7dd293d8757e8a4240d92472eda354490ffbfd" Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.072881 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.082091 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-77e3-account-create-update-8b9pf" event={"ID":"130238c4-fadf-46e2-a802-0608b83ec9a2","Type":"ContainerDied","Data":"49121ea378caaf9cc11422f1c3dd400f8b1339e25b96b77511d70d331664207b"} Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.082150 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49121ea378caaf9cc11422f1c3dd400f8b1339e25b96b77511d70d331664207b" Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.082105 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.121608 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.719688 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" path="/var/lib/kubelet/pods/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6/volumes" Mar 07 08:14:46 crc kubenswrapper[4761]: I0307 08:14:46.099783 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8661e6f-7759-475f-8964-bae1b8cfebbe","Type":"ContainerStarted","Data":"5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49"} Mar 07 08:14:46 crc kubenswrapper[4761]: I0307 08:14:46.100104 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8661e6f-7759-475f-8964-bae1b8cfebbe","Type":"ContainerStarted","Data":"f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110"} Mar 07 08:14:46 crc kubenswrapper[4761]: I0307 08:14:46.100120 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"d8661e6f-7759-475f-8964-bae1b8cfebbe","Type":"ContainerStarted","Data":"4186f52ac38d34bec86c4b23d24c511d84987ec76656cdaf97b0c90bf3b66e26"} Mar 07 08:14:46 crc kubenswrapper[4761]: I0307 08:14:46.134509 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.134481563 podStartE2EDuration="2.134481563s" podCreationTimestamp="2026-03-07 08:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:46.123428878 +0000 UTC m=+1543.032595393" watchObservedRunningTime="2026-03-07 08:14:46.134481563 +0000 UTC m=+1543.043648058" Mar 07 08:14:46 crc kubenswrapper[4761]: E0307 08:14:46.346361 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 07 08:14:46 crc kubenswrapper[4761]: E0307 08:14:46.348858 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 07 08:14:46 crc kubenswrapper[4761]: E0307 08:14:46.352773 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 07 08:14:46 crc kubenswrapper[4761]: E0307 08:14:46.352851 4761 prober.go:104] "Probe errored" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9261fde2-342e-4a37-b8f9-f6715d09b003" containerName="nova-scheduler-scheduler" Mar 07 08:14:48 crc kubenswrapper[4761]: I0307 08:14:48.124978 4761 generic.go:334] "Generic (PLEG): container finished" podID="4931aa42-2c29-4ec8-ba24-e90210ad1aca" containerID="00517bee769197b1cd470a476b898df7ad9f81d3ab127b1e7dddf7ed79e2908b" exitCode=0 Mar 07 08:14:48 crc kubenswrapper[4761]: I0307 08:14:48.125507 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hwxg9" event={"ID":"4931aa42-2c29-4ec8-ba24-e90210ad1aca","Type":"ContainerDied","Data":"00517bee769197b1cd470a476b898df7ad9f81d3ab127b1e7dddf7ed79e2908b"} Mar 07 08:14:48 crc kubenswrapper[4761]: I0307 08:14:48.877927 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.011666 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt6kp\" (UniqueName: \"kubernetes.io/projected/9261fde2-342e-4a37-b8f9-f6715d09b003-kube-api-access-gt6kp\") pod \"9261fde2-342e-4a37-b8f9-f6715d09b003\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.011747 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-combined-ca-bundle\") pod \"9261fde2-342e-4a37-b8f9-f6715d09b003\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.012158 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-config-data\") pod 
\"9261fde2-342e-4a37-b8f9-f6715d09b003\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.019201 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9261fde2-342e-4a37-b8f9-f6715d09b003-kube-api-access-gt6kp" (OuterVolumeSpecName: "kube-api-access-gt6kp") pod "9261fde2-342e-4a37-b8f9-f6715d09b003" (UID: "9261fde2-342e-4a37-b8f9-f6715d09b003"). InnerVolumeSpecName "kube-api-access-gt6kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.052960 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9261fde2-342e-4a37-b8f9-f6715d09b003" (UID: "9261fde2-342e-4a37-b8f9-f6715d09b003"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.059796 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-config-data" (OuterVolumeSpecName: "config-data") pod "9261fde2-342e-4a37-b8f9-f6715d09b003" (UID: "9261fde2-342e-4a37-b8f9-f6715d09b003"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.114944 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.114986 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt6kp\" (UniqueName: \"kubernetes.io/projected/9261fde2-342e-4a37-b8f9-f6715d09b003-kube-api-access-gt6kp\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.115001 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.138165 4761 generic.go:334] "Generic (PLEG): container finished" podID="9261fde2-342e-4a37-b8f9-f6715d09b003" containerID="19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1" exitCode=0 Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.138227 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.138284 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9261fde2-342e-4a37-b8f9-f6715d09b003","Type":"ContainerDied","Data":"19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1"} Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.138312 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9261fde2-342e-4a37-b8f9-f6715d09b003","Type":"ContainerDied","Data":"c26fcbe9ab1e6a3c13a0c0ab87a0dcb9733d543c9e555bcd15e6fdc735b44d88"} Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.138328 4761 scope.go:117] "RemoveContainer" containerID="19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.182421 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.193066 4761 scope.go:117] "RemoveContainer" containerID="19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1" Mar 07 08:14:49 crc kubenswrapper[4761]: E0307 08:14:49.195271 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1\": container with ID starting with 19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1 not found: ID does not exist" containerID="19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.195319 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1"} err="failed to get container status \"19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1\": rpc error: code = NotFound 
desc = could not find container \"19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1\": container with ID starting with 19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1 not found: ID does not exist" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.210594 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.223923 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:14:49 crc kubenswrapper[4761]: E0307 08:14:49.224618 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="130238c4-fadf-46e2-a802-0608b83ec9a2" containerName="mariadb-account-create-update" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.224653 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="130238c4-fadf-46e2-a802-0608b83ec9a2" containerName="mariadb-account-create-update" Mar 07 08:14:49 crc kubenswrapper[4761]: E0307 08:14:49.224701 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7b5d35-c686-46fe-9e07-8f95cba61e5b" containerName="mariadb-database-create" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.224729 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7b5d35-c686-46fe-9e07-8f95cba61e5b" containerName="mariadb-database-create" Mar 07 08:14:49 crc kubenswrapper[4761]: E0307 08:14:49.224752 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9261fde2-342e-4a37-b8f9-f6715d09b003" containerName="nova-scheduler-scheduler" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.224762 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9261fde2-342e-4a37-b8f9-f6715d09b003" containerName="nova-scheduler-scheduler" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.225047 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9261fde2-342e-4a37-b8f9-f6715d09b003" containerName="nova-scheduler-scheduler" Mar 
07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.225090 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="130238c4-fadf-46e2-a802-0608b83ec9a2" containerName="mariadb-account-create-update" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.225129 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f7b5d35-c686-46fe-9e07-8f95cba61e5b" containerName="mariadb-database-create" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.226334 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.228733 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.233658 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.319856 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-config-data\") pod \"nova-scheduler-0\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.319974 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.320010 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bthnz\" (UniqueName: \"kubernetes.io/projected/83e95e07-cc49-4e75-a0e9-0299705fc32a-kube-api-access-bthnz\") pod \"nova-scheduler-0\" 
(UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.422821 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-config-data\") pod \"nova-scheduler-0\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.423185 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.423234 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bthnz\" (UniqueName: \"kubernetes.io/projected/83e95e07-cc49-4e75-a0e9-0299705fc32a-kube-api-access-bthnz\") pod \"nova-scheduler-0\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.436682 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.444405 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-config-data\") pod \"nova-scheduler-0\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.449134 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bthnz\" (UniqueName: \"kubernetes.io/projected/83e95e07-cc49-4e75-a0e9-0299705fc32a-kube-api-access-bthnz\") pod \"nova-scheduler-0\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.536176 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.536234 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.570072 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.722471 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9261fde2-342e-4a37-b8f9-f6715d09b003" path="/var/lib/kubelet/pods/9261fde2-342e-4a37-b8f9-f6715d09b003/volumes" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.814761 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.934279 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-combined-ca-bundle\") pod \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.934484 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-config-data\") pod \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.934592 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmw55\" (UniqueName: \"kubernetes.io/projected/4931aa42-2c29-4ec8-ba24-e90210ad1aca-kube-api-access-rmw55\") pod \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.934819 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-scripts\") pod \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.941491 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-scripts" (OuterVolumeSpecName: "scripts") pod "4931aa42-2c29-4ec8-ba24-e90210ad1aca" (UID: "4931aa42-2c29-4ec8-ba24-e90210ad1aca"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.942190 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4931aa42-2c29-4ec8-ba24-e90210ad1aca-kube-api-access-rmw55" (OuterVolumeSpecName: "kube-api-access-rmw55") pod "4931aa42-2c29-4ec8-ba24-e90210ad1aca" (UID: "4931aa42-2c29-4ec8-ba24-e90210ad1aca"). InnerVolumeSpecName "kube-api-access-rmw55". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.970855 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4931aa42-2c29-4ec8-ba24-e90210ad1aca" (UID: "4931aa42-2c29-4ec8-ba24-e90210ad1aca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.978371 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-config-data" (OuterVolumeSpecName: "config-data") pod "4931aa42-2c29-4ec8-ba24-e90210ad1aca" (UID: "4931aa42-2c29-4ec8-ba24-e90210ad1aca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.038509 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.038546 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.038559 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.038574 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmw55\" (UniqueName: \"kubernetes.io/projected/4931aa42-2c29-4ec8-ba24-e90210ad1aca-kube-api-access-rmw55\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.062404 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:14:50 crc kubenswrapper[4761]: W0307 08:14:50.064751 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83e95e07_cc49_4e75_a0e9_0299705fc32a.slice/crio-c57f522bb33dbd2d3cf1cf8e0cf8793d2336fe8f26897174337fdee177604cb6 WatchSource:0}: Error finding container c57f522bb33dbd2d3cf1cf8e0cf8793d2336fe8f26897174337fdee177604cb6: Status 404 returned error can't find the container with id c57f522bb33dbd2d3cf1cf8e0cf8793d2336fe8f26897174337fdee177604cb6 Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.164031 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hwxg9" 
event={"ID":"4931aa42-2c29-4ec8-ba24-e90210ad1aca","Type":"ContainerDied","Data":"cdcc6626262cfb72659a9389831ee3c44de2130d63ecb8bbea3fe4800a68c1f1"} Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.164294 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdcc6626262cfb72659a9389831ee3c44de2130d63ecb8bbea3fe4800a68c1f1" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.164348 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.168451 4761 generic.go:334] "Generic (PLEG): container finished" podID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerID="ac8a691cc2fd6e47a8c926f63560099be8b63de821acd956efaa772c91cc8f15" exitCode=0 Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.168529 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e64fc67d-9589-470f-bac1-53ab06ccf63a","Type":"ContainerDied","Data":"ac8a691cc2fd6e47a8c926f63560099be8b63de821acd956efaa772c91cc8f15"} Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.195057 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83e95e07-cc49-4e75-a0e9-0299705fc32a","Type":"ContainerStarted","Data":"c57f522bb33dbd2d3cf1cf8e0cf8793d2336fe8f26897174337fdee177604cb6"} Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.242353 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 07 08:14:50 crc kubenswrapper[4761]: E0307 08:14:50.242981 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4931aa42-2c29-4ec8-ba24-e90210ad1aca" containerName="nova-cell1-conductor-db-sync" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.242999 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4931aa42-2c29-4ec8-ba24-e90210ad1aca" containerName="nova-cell1-conductor-db-sync" Mar 
07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.243359 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="4931aa42-2c29-4ec8-ba24-e90210ad1aca" containerName="nova-cell1-conductor-db-sync" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.244556 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.245179 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.249098 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.285821 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.345629 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e64fc67d-9589-470f-bac1-53ab06ccf63a-logs\") pod \"e64fc67d-9589-470f-bac1-53ab06ccf63a\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.345828 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-combined-ca-bundle\") pod \"e64fc67d-9589-470f-bac1-53ab06ccf63a\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.345910 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-config-data\") pod \"e64fc67d-9589-470f-bac1-53ab06ccf63a\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.345948 
4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4twrn\" (UniqueName: \"kubernetes.io/projected/e64fc67d-9589-470f-bac1-53ab06ccf63a-kube-api-access-4twrn\") pod \"e64fc67d-9589-470f-bac1-53ab06ccf63a\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.346245 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e64fc67d-9589-470f-bac1-53ab06ccf63a-logs" (OuterVolumeSpecName: "logs") pod "e64fc67d-9589-470f-bac1-53ab06ccf63a" (UID: "e64fc67d-9589-470f-bac1-53ab06ccf63a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.349234 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993e0457-91eb-4234-ad39-0855846b8d31-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"993e0457-91eb-4234-ad39-0855846b8d31\") " pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.349392 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993e0457-91eb-4234-ad39-0855846b8d31-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"993e0457-91eb-4234-ad39-0855846b8d31\") " pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.349439 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll677\" (UniqueName: \"kubernetes.io/projected/993e0457-91eb-4234-ad39-0855846b8d31-kube-api-access-ll677\") pod \"nova-cell1-conductor-0\" (UID: \"993e0457-91eb-4234-ad39-0855846b8d31\") " pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.349530 4761 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e64fc67d-9589-470f-bac1-53ab06ccf63a-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.351253 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e64fc67d-9589-470f-bac1-53ab06ccf63a-kube-api-access-4twrn" (OuterVolumeSpecName: "kube-api-access-4twrn") pod "e64fc67d-9589-470f-bac1-53ab06ccf63a" (UID: "e64fc67d-9589-470f-bac1-53ab06ccf63a"). InnerVolumeSpecName "kube-api-access-4twrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.382523 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-config-data" (OuterVolumeSpecName: "config-data") pod "e64fc67d-9589-470f-bac1-53ab06ccf63a" (UID: "e64fc67d-9589-470f-bac1-53ab06ccf63a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.387926 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e64fc67d-9589-470f-bac1-53ab06ccf63a" (UID: "e64fc67d-9589-470f-bac1-53ab06ccf63a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.451477 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993e0457-91eb-4234-ad39-0855846b8d31-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"993e0457-91eb-4234-ad39-0855846b8d31\") " pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.451853 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993e0457-91eb-4234-ad39-0855846b8d31-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"993e0457-91eb-4234-ad39-0855846b8d31\") " pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.451890 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll677\" (UniqueName: \"kubernetes.io/projected/993e0457-91eb-4234-ad39-0855846b8d31-kube-api-access-ll677\") pod \"nova-cell1-conductor-0\" (UID: \"993e0457-91eb-4234-ad39-0855846b8d31\") " pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.451980 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.451994 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4twrn\" (UniqueName: \"kubernetes.io/projected/e64fc67d-9589-470f-bac1-53ab06ccf63a-kube-api-access-4twrn\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.452005 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.455437 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993e0457-91eb-4234-ad39-0855846b8d31-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"993e0457-91eb-4234-ad39-0855846b8d31\") " pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.458047 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993e0457-91eb-4234-ad39-0855846b8d31-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"993e0457-91eb-4234-ad39-0855846b8d31\") " pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.471662 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll677\" (UniqueName: \"kubernetes.io/projected/993e0457-91eb-4234-ad39-0855846b8d31-kube-api-access-ll677\") pod \"nova-cell1-conductor-0\" (UID: \"993e0457-91eb-4234-ad39-0855846b8d31\") " pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.584399 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.740764 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-s8wjr"] Mar 07 08:14:50 crc kubenswrapper[4761]: E0307 08:14:50.741698 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-api" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.741736 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-api" Mar 07 08:14:50 crc kubenswrapper[4761]: E0307 08:14:50.741778 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-log" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.741788 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-log" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.742084 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-log" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.742116 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-api" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.743154 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.745624 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.746055 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.747174 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.747392 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-wcdfq" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.752433 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-s8wjr"] Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.862757 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v52cm\" (UniqueName: \"kubernetes.io/projected/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-kube-api-access-v52cm\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.863046 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-scripts\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.863162 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-config-data\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " 
pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.863338 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-combined-ca-bundle\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.967481 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-config-data\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.967625 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-combined-ca-bundle\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.967870 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v52cm\" (UniqueName: \"kubernetes.io/projected/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-kube-api-access-v52cm\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.967917 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-scripts\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.972776 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-scripts\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.973385 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-config-data\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.981298 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-combined-ca-bundle\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.988361 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v52cm\" (UniqueName: \"kubernetes.io/projected/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-kube-api-access-v52cm\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.074380 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.110544 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.235159 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e64fc67d-9589-470f-bac1-53ab06ccf63a","Type":"ContainerDied","Data":"d29b0f6f71bec0433060335fb0e11ffa4d4c536c6966ca333d10fff1bcacec70"} Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.236333 4761 scope.go:117] "RemoveContainer" containerID="ac8a691cc2fd6e47a8c926f63560099be8b63de821acd956efaa772c91cc8f15" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.236493 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.242154 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83e95e07-cc49-4e75-a0e9-0299705fc32a","Type":"ContainerStarted","Data":"73e12f6e261d32881449931ab013be50ac0618447dbd3fb812023e062eb1546a"} Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.249112 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"993e0457-91eb-4234-ad39-0855846b8d31","Type":"ContainerStarted","Data":"3128d9b4bce41cca1685e0f8982ba2180094ecf72ca915585a6575eeaf8abd49"} Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.259618 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.259594748 podStartE2EDuration="2.259594748s" podCreationTimestamp="2026-03-07 08:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:51.258217304 +0000 UTC m=+1548.167383779" watchObservedRunningTime="2026-03-07 
08:14:51.259594748 +0000 UTC m=+1548.168761213" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.302628 4761 scope.go:117] "RemoveContainer" containerID="58dee734df2c2e42b401af896e07c889c430ed2732c73baf08dc81398684e967" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.308358 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.321900 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.337664 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.339658 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.342062 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.348842 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.482607 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmdjm\" (UniqueName: \"kubernetes.io/projected/ef52f146-bda3-462c-9d12-7f3200a1161b-kube-api-access-mmdjm\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.483084 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-config-data\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.483309 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.483387 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef52f146-bda3-462c-9d12-7f3200a1161b-logs\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: W0307 08:14:51.579366 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60fdff4b_2ca4_472c_8c44_40101c4a8fe1.slice/crio-dc0d99006cc692c9e2fb824684ed1095ac9a53fc8c16adec38f9c439e1fc3f9c WatchSource:0}: Error finding container dc0d99006cc692c9e2fb824684ed1095ac9a53fc8c16adec38f9c439e1fc3f9c: Status 404 returned error can't find the container with id dc0d99006cc692c9e2fb824684ed1095ac9a53fc8c16adec38f9c439e1fc3f9c Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.579442 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-s8wjr"] Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.587179 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.587232 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef52f146-bda3-462c-9d12-7f3200a1161b-logs\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " 
pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.587303 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmdjm\" (UniqueName: \"kubernetes.io/projected/ef52f146-bda3-462c-9d12-7f3200a1161b-kube-api-access-mmdjm\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.587372 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-config-data\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.588750 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef52f146-bda3-462c-9d12-7f3200a1161b-logs\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.592507 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-config-data\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.593475 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.614904 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmdjm\" (UniqueName: 
\"kubernetes.io/projected/ef52f146-bda3-462c-9d12-7f3200a1161b-kube-api-access-mmdjm\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.670453 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.722132 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" path="/var/lib/kubelet/pods/e64fc67d-9589-470f-bac1-53ab06ccf63a/volumes" Mar 07 08:14:52 crc kubenswrapper[4761]: I0307 08:14:52.179572 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:14:52 crc kubenswrapper[4761]: I0307 08:14:52.271034 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"993e0457-91eb-4234-ad39-0855846b8d31","Type":"ContainerStarted","Data":"6f4ff532ad26fe3b5ca7b8e749e9f1b1ba9162093ecc338b8cc1e62c51e46111"} Mar 07 08:14:52 crc kubenswrapper[4761]: I0307 08:14:52.271223 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:52 crc kubenswrapper[4761]: I0307 08:14:52.276911 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-s8wjr" event={"ID":"60fdff4b-2ca4-472c-8c44-40101c4a8fe1","Type":"ContainerStarted","Data":"dc0d99006cc692c9e2fb824684ed1095ac9a53fc8c16adec38f9c439e1fc3f9c"} Mar 07 08:14:52 crc kubenswrapper[4761]: I0307 08:14:52.289289 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef52f146-bda3-462c-9d12-7f3200a1161b","Type":"ContainerStarted","Data":"38c8e87693ca31cdfaa2df02df9f5f7dbd3ac9a1d760025ec704d4f8e0d85068"} Mar 07 08:14:52 crc kubenswrapper[4761]: I0307 08:14:52.306576 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.306538803 podStartE2EDuration="2.306538803s" podCreationTimestamp="2026-03-07 08:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:52.29001785 +0000 UTC m=+1549.199184345" watchObservedRunningTime="2026-03-07 08:14:52.306538803 +0000 UTC m=+1549.215705278" Mar 07 08:14:53 crc kubenswrapper[4761]: I0307 08:14:53.307396 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef52f146-bda3-462c-9d12-7f3200a1161b","Type":"ContainerStarted","Data":"ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44"} Mar 07 08:14:53 crc kubenswrapper[4761]: I0307 08:14:53.308065 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef52f146-bda3-462c-9d12-7f3200a1161b","Type":"ContainerStarted","Data":"7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4"} Mar 07 08:14:53 crc kubenswrapper[4761]: I0307 08:14:53.339377 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.339354514 podStartE2EDuration="2.339354514s" podCreationTimestamp="2026-03-07 08:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:53.330149085 +0000 UTC m=+1550.239315550" watchObservedRunningTime="2026-03-07 08:14:53.339354514 +0000 UTC m=+1550.248520999" Mar 07 08:14:54 crc kubenswrapper[4761]: I0307 08:14:54.536289 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 07 08:14:54 crc kubenswrapper[4761]: I0307 08:14:54.536591 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 07 08:14:54 crc kubenswrapper[4761]: I0307 08:14:54.570983 4761 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 07 08:14:56 crc kubenswrapper[4761]: I0307 08:14:56.030960 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.0:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 08:14:56 crc kubenswrapper[4761]: I0307 08:14:56.031007 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.0:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 08:14:56 crc kubenswrapper[4761]: I0307 08:14:56.346347 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-s8wjr" event={"ID":"60fdff4b-2ca4-472c-8c44-40101c4a8fe1","Type":"ContainerStarted","Data":"ee98daeed4689551f2d9b8f315dc5f2150a8e0d8bb1624db07ae27201527b436"} Mar 07 08:14:56 crc kubenswrapper[4761]: I0307 08:14:56.370903 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-s8wjr" podStartSLOduration=2.239036948 podStartE2EDuration="6.370876309s" podCreationTimestamp="2026-03-07 08:14:50 +0000 UTC" firstStartedPulling="2026-03-07 08:14:51.582844281 +0000 UTC m=+1548.492010756" lastFinishedPulling="2026-03-07 08:14:55.714683652 +0000 UTC m=+1552.623850117" observedRunningTime="2026-03-07 08:14:56.362791318 +0000 UTC m=+1553.271957813" watchObservedRunningTime="2026-03-07 08:14:56.370876309 +0000 UTC m=+1553.280042804" Mar 07 08:14:58 crc kubenswrapper[4761]: I0307 08:14:58.380399 4761 generic.go:334] "Generic (PLEG): container finished" podID="60fdff4b-2ca4-472c-8c44-40101c4a8fe1" containerID="ee98daeed4689551f2d9b8f315dc5f2150a8e0d8bb1624db07ae27201527b436" 
exitCode=0 Mar 07 08:14:58 crc kubenswrapper[4761]: I0307 08:14:58.380462 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-s8wjr" event={"ID":"60fdff4b-2ca4-472c-8c44-40101c4a8fe1","Type":"ContainerDied","Data":"ee98daeed4689551f2d9b8f315dc5f2150a8e0d8bb1624db07ae27201527b436"} Mar 07 08:14:59 crc kubenswrapper[4761]: I0307 08:14:59.570771 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 07 08:14:59 crc kubenswrapper[4761]: I0307 08:14:59.620660 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 07 08:14:59 crc kubenswrapper[4761]: I0307 08:14:59.947673 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.001035 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-scripts\") pod \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.001316 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-combined-ca-bundle\") pod \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.001377 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v52cm\" (UniqueName: \"kubernetes.io/projected/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-kube-api-access-v52cm\") pod \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.001462 4761 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-config-data\") pod \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.009453 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-kube-api-access-v52cm" (OuterVolumeSpecName: "kube-api-access-v52cm") pod "60fdff4b-2ca4-472c-8c44-40101c4a8fe1" (UID: "60fdff4b-2ca4-472c-8c44-40101c4a8fe1"). InnerVolumeSpecName "kube-api-access-v52cm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.009773 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-scripts" (OuterVolumeSpecName: "scripts") pod "60fdff4b-2ca4-472c-8c44-40101c4a8fe1" (UID: "60fdff4b-2ca4-472c-8c44-40101c4a8fe1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.036343 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60fdff4b-2ca4-472c-8c44-40101c4a8fe1" (UID: "60fdff4b-2ca4-472c-8c44-40101c4a8fe1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.057966 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-config-data" (OuterVolumeSpecName: "config-data") pod "60fdff4b-2ca4-472c-8c44-40101c4a8fe1" (UID: "60fdff4b-2ca4-472c-8c44-40101c4a8fe1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.152313 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v52cm\" (UniqueName: \"kubernetes.io/projected/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-kube-api-access-v52cm\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.152368 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.152386 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.152404 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.181657 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn"] Mar 07 08:15:00 crc kubenswrapper[4761]: E0307 08:15:00.182354 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60fdff4b-2ca4-472c-8c44-40101c4a8fe1" containerName="aodh-db-sync" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.182379 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="60fdff4b-2ca4-472c-8c44-40101c4a8fe1" containerName="aodh-db-sync" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.182657 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="60fdff4b-2ca4-472c-8c44-40101c4a8fe1" containerName="aodh-db-sync" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.183744 4761 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.187181 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.187624 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.202266 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn"] Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.356857 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/840f778c-fb9b-4f24-b884-fb58aa298ad5-config-volume\") pod \"collect-profiles-29547855-qxjtn\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.357451 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85gjv\" (UniqueName: \"kubernetes.io/projected/840f778c-fb9b-4f24-b884-fb58aa298ad5-kube-api-access-85gjv\") pod \"collect-profiles-29547855-qxjtn\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.357808 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/840f778c-fb9b-4f24-b884-fb58aa298ad5-secret-volume\") pod \"collect-profiles-29547855-qxjtn\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.407603 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-s8wjr" event={"ID":"60fdff4b-2ca4-472c-8c44-40101c4a8fe1","Type":"ContainerDied","Data":"dc0d99006cc692c9e2fb824684ed1095ac9a53fc8c16adec38f9c439e1fc3f9c"} Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.407663 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc0d99006cc692c9e2fb824684ed1095ac9a53fc8c16adec38f9c439e1fc3f9c" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.407699 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.448710 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.466024 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85gjv\" (UniqueName: \"kubernetes.io/projected/840f778c-fb9b-4f24-b884-fb58aa298ad5-kube-api-access-85gjv\") pod \"collect-profiles-29547855-qxjtn\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.466228 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/840f778c-fb9b-4f24-b884-fb58aa298ad5-secret-volume\") pod \"collect-profiles-29547855-qxjtn\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.466321 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/840f778c-fb9b-4f24-b884-fb58aa298ad5-config-volume\") pod \"collect-profiles-29547855-qxjtn\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.467444 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/840f778c-fb9b-4f24-b884-fb58aa298ad5-config-volume\") pod \"collect-profiles-29547855-qxjtn\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.476362 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/840f778c-fb9b-4f24-b884-fb58aa298ad5-secret-volume\") pod \"collect-profiles-29547855-qxjtn\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.487140 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85gjv\" (UniqueName: \"kubernetes.io/projected/840f778c-fb9b-4f24-b884-fb58aa298ad5-kube-api-access-85gjv\") pod \"collect-profiles-29547855-qxjtn\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.509143 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.619992 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.811965 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.815342 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.821196 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.821265 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-wcdfq" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.822311 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.825867 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.979006 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-997bx\" (UniqueName: \"kubernetes.io/projected/5f20b55e-e643-4c84-8929-dccc23092137-kube-api-access-997bx\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.979137 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:00 crc 
kubenswrapper[4761]: I0307 08:15:00.979283 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-config-data\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.979305 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-scripts\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.986158 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn"] Mar 07 08:15:00 crc kubenswrapper[4761]: W0307 08:15:00.992501 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod840f778c_fb9b_4f24_b884_fb58aa298ad5.slice/crio-f2cdeb4bfd23cd8e46898accc963a6f1c34cab90f747a4cc737be76baa9ee31f WatchSource:0}: Error finding container f2cdeb4bfd23cd8e46898accc963a6f1c34cab90f747a4cc737be76baa9ee31f: Status 404 returned error can't find the container with id f2cdeb4bfd23cd8e46898accc963a6f1c34cab90f747a4cc737be76baa9ee31f Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.081066 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-config-data\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.081112 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-scripts\") pod 
\"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.081191 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-997bx\" (UniqueName: \"kubernetes.io/projected/5f20b55e-e643-4c84-8929-dccc23092137-kube-api-access-997bx\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.081255 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.088397 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-config-data\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.089650 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.089937 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-scripts\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.101358 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-997bx\" (UniqueName: 
\"kubernetes.io/projected/5f20b55e-e643-4c84-8929-dccc23092137-kube-api-access-997bx\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.196698 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.426836 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" event={"ID":"840f778c-fb9b-4f24-b884-fb58aa298ad5","Type":"ContainerStarted","Data":"5c05a441b88ae639cd727974fecbf1db12e7886b856bb7b0883e62ba8cee569b"} Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.427170 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" event={"ID":"840f778c-fb9b-4f24-b884-fb58aa298ad5","Type":"ContainerStarted","Data":"f2cdeb4bfd23cd8e46898accc963a6f1c34cab90f747a4cc737be76baa9ee31f"} Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.470106 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" podStartSLOduration=1.470083419 podStartE2EDuration="1.470083419s" podCreationTimestamp="2026-03-07 08:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:15:01.442662435 +0000 UTC m=+1558.351828920" watchObservedRunningTime="2026-03-07 08:15:01.470083419 +0000 UTC m=+1558.379249894" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.671432 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.672006 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 07 08:15:01 crc 
kubenswrapper[4761]: I0307 08:15:01.772324 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 07 08:15:02 crc kubenswrapper[4761]: I0307 08:15:02.439330 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerStarted","Data":"85e34ca45a813a41b0ef29847196fda16d966d169d96229373dd4870bf277c21"} Mar 07 08:15:02 crc kubenswrapper[4761]: I0307 08:15:02.441696 4761 generic.go:334] "Generic (PLEG): container finished" podID="840f778c-fb9b-4f24-b884-fb58aa298ad5" containerID="5c05a441b88ae639cd727974fecbf1db12e7886b856bb7b0883e62ba8cee569b" exitCode=0 Mar 07 08:15:02 crc kubenswrapper[4761]: I0307 08:15:02.441770 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" event={"ID":"840f778c-fb9b-4f24-b884-fb58aa298ad5","Type":"ContainerDied","Data":"5c05a441b88ae639cd727974fecbf1db12e7886b856bb7b0883e62ba8cee569b"} Mar 07 08:15:02 crc kubenswrapper[4761]: I0307 08:15:02.754429 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 08:15:02 crc kubenswrapper[4761]: I0307 08:15:02.754885 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 08:15:03 crc kubenswrapper[4761]: I0307 08:15:03.487059 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerStarted","Data":"4e7d19ecfc8d3734356a0832721b8ff789bad3f1d623fbc7262aab81f59906fc"} Mar 07 08:15:03 crc kubenswrapper[4761]: I0307 08:15:03.821548 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:03 crc kubenswrapper[4761]: I0307 08:15:03.822129 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="ceilometer-central-agent" containerID="cri-o://69b4bf84dd39df6b7d9b398c110264cd806fbdc4859293bf644ef1767167f6e9" gracePeriod=30 Mar 07 08:15:03 crc kubenswrapper[4761]: I0307 08:15:03.822969 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="proxy-httpd" containerID="cri-o://07338e0375850617de7a90d252dd69e08c516f72a7329bdb33d6d7250f0f8095" gracePeriod=30 Mar 07 08:15:03 crc kubenswrapper[4761]: I0307 08:15:03.823018 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="sg-core" containerID="cri-o://178b4b5a5b5a97c98c5a01eeef66b7b962e2bb8a1f3fd5c70b486b42f553a81f" gracePeriod=30 Mar 07 08:15:03 crc kubenswrapper[4761]: I0307 08:15:03.823047 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="ceilometer-notification-agent" containerID="cri-o://9fbf4f9d40a0ec24b8dea09bb5d46ee8c49f0582f2fa196ad53b3fa0be0e0a4f" gracePeriod=30 Mar 07 08:15:03 crc kubenswrapper[4761]: I0307 08:15:03.931834 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.252:3000/\": read tcp 
10.217.0.2:39874->10.217.0.252:3000: read: connection reset by peer" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.282251 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.502172 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" event={"ID":"840f778c-fb9b-4f24-b884-fb58aa298ad5","Type":"ContainerDied","Data":"f2cdeb4bfd23cd8e46898accc963a6f1c34cab90f747a4cc737be76baa9ee31f"} Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.502440 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2cdeb4bfd23cd8e46898accc963a6f1c34cab90f747a4cc737be76baa9ee31f" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.508123 4761 generic.go:334] "Generic (PLEG): container finished" podID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerID="07338e0375850617de7a90d252dd69e08c516f72a7329bdb33d6d7250f0f8095" exitCode=0 Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.508505 4761 generic.go:334] "Generic (PLEG): container finished" podID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerID="178b4b5a5b5a97c98c5a01eeef66b7b962e2bb8a1f3fd5c70b486b42f553a81f" exitCode=2 Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.508610 4761 generic.go:334] "Generic (PLEG): container finished" podID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerID="69b4bf84dd39df6b7d9b398c110264cd806fbdc4859293bf644ef1767167f6e9" exitCode=0 Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.508688 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerDied","Data":"07338e0375850617de7a90d252dd69e08c516f72a7329bdb33d6d7250f0f8095"} Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.508819 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerDied","Data":"178b4b5a5b5a97c98c5a01eeef66b7b962e2bb8a1f3fd5c70b486b42f553a81f"} Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.508906 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerDied","Data":"69b4bf84dd39df6b7d9b398c110264cd806fbdc4859293bf644ef1767167f6e9"} Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.533424 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.544009 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.554587 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.572128 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.612593 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/840f778c-fb9b-4f24-b884-fb58aa298ad5-config-volume\") pod \"840f778c-fb9b-4f24-b884-fb58aa298ad5\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.613307 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/840f778c-fb9b-4f24-b884-fb58aa298ad5-secret-volume\") pod \"840f778c-fb9b-4f24-b884-fb58aa298ad5\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.613347 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-85gjv\" (UniqueName: \"kubernetes.io/projected/840f778c-fb9b-4f24-b884-fb58aa298ad5-kube-api-access-85gjv\") pod \"840f778c-fb9b-4f24-b884-fb58aa298ad5\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.613514 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/840f778c-fb9b-4f24-b884-fb58aa298ad5-config-volume" (OuterVolumeSpecName: "config-volume") pod "840f778c-fb9b-4f24-b884-fb58aa298ad5" (UID: "840f778c-fb9b-4f24-b884-fb58aa298ad5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.614576 4761 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/840f778c-fb9b-4f24-b884-fb58aa298ad5-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.635054 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/840f778c-fb9b-4f24-b884-fb58aa298ad5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "840f778c-fb9b-4f24-b884-fb58aa298ad5" (UID: "840f778c-fb9b-4f24-b884-fb58aa298ad5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.644525 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/840f778c-fb9b-4f24-b884-fb58aa298ad5-kube-api-access-85gjv" (OuterVolumeSpecName: "kube-api-access-85gjv") pod "840f778c-fb9b-4f24-b884-fb58aa298ad5" (UID: "840f778c-fb9b-4f24-b884-fb58aa298ad5"). InnerVolumeSpecName "kube-api-access-85gjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.716453 4761 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/840f778c-fb9b-4f24-b884-fb58aa298ad5-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.716643 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85gjv\" (UniqueName: \"kubernetes.io/projected/840f778c-fb9b-4f24-b884-fb58aa298ad5-kube-api-access-85gjv\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.524063 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerStarted","Data":"62efc0d0d775ac67bd7a9ef68d8de2bc66a98a78db91f3f6ce87d93e2f2a1663"} Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.527199 4761 generic.go:334] "Generic (PLEG): container finished" podID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerID="9fbf4f9d40a0ec24b8dea09bb5d46ee8c49f0582f2fa196ad53b3fa0be0e0a4f" exitCode=0 Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.527302 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerDied","Data":"9fbf4f9d40a0ec24b8dea09bb5d46ee8c49f0582f2fa196ad53b3fa0be0e0a4f"} Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.527362 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerDied","Data":"44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4"} Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.527390 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4" Mar 07 08:15:05 crc 
kubenswrapper[4761]: I0307 08:15:05.527467 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.538396 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.590585 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.742577 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-run-httpd\") pod \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.743022 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-log-httpd\") pod \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.743062 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-combined-ca-bundle\") pod \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.743363 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "204cf001-190d-4ecc-9bbf-7ba7fe2bad14" (UID: "204cf001-190d-4ecc-9bbf-7ba7fe2bad14"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.743502 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "204cf001-190d-4ecc-9bbf-7ba7fe2bad14" (UID: "204cf001-190d-4ecc-9bbf-7ba7fe2bad14"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.743908 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-scripts\") pod \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.743971 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkszg\" (UniqueName: \"kubernetes.io/projected/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-kube-api-access-zkszg\") pod \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.744210 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-sg-core-conf-yaml\") pod \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.744248 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-config-data\") pod \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.745594 4761 reconciler_common.go:293] "Volume 
detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.745633 4761 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.752103 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-scripts" (OuterVolumeSpecName: "scripts") pod "204cf001-190d-4ecc-9bbf-7ba7fe2bad14" (UID: "204cf001-190d-4ecc-9bbf-7ba7fe2bad14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.752122 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-kube-api-access-zkszg" (OuterVolumeSpecName: "kube-api-access-zkszg") pod "204cf001-190d-4ecc-9bbf-7ba7fe2bad14" (UID: "204cf001-190d-4ecc-9bbf-7ba7fe2bad14"). InnerVolumeSpecName "kube-api-access-zkszg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.786945 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "204cf001-190d-4ecc-9bbf-7ba7fe2bad14" (UID: "204cf001-190d-4ecc-9bbf-7ba7fe2bad14"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.851985 4761 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.852015 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.852025 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkszg\" (UniqueName: \"kubernetes.io/projected/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-kube-api-access-zkszg\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.925626 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-config-data" (OuterVolumeSpecName: "config-data") pod "204cf001-190d-4ecc-9bbf-7ba7fe2bad14" (UID: "204cf001-190d-4ecc-9bbf-7ba7fe2bad14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.927132 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "204cf001-190d-4ecc-9bbf-7ba7fe2bad14" (UID: "204cf001-190d-4ecc-9bbf-7ba7fe2bad14"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.954899 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.954934 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.543424 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerStarted","Data":"c15a8f34b90748d4123aa8305d977da985ad4ee833bd6258f7893d25a0f01981"} Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.543461 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.601558 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.630796 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.653827 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:06 crc kubenswrapper[4761]: E0307 08:15:06.654568 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="ceilometer-central-agent" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.654648 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="ceilometer-central-agent" Mar 07 08:15:06 crc kubenswrapper[4761]: E0307 08:15:06.654736 4761 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="ceilometer-notification-agent" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.654802 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="ceilometer-notification-agent" Mar 07 08:15:06 crc kubenswrapper[4761]: E0307 08:15:06.654869 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="proxy-httpd" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.654919 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="proxy-httpd" Mar 07 08:15:06 crc kubenswrapper[4761]: E0307 08:15:06.654983 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="840f778c-fb9b-4f24-b884-fb58aa298ad5" containerName="collect-profiles" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.655032 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="840f778c-fb9b-4f24-b884-fb58aa298ad5" containerName="collect-profiles" Mar 07 08:15:06 crc kubenswrapper[4761]: E0307 08:15:06.655095 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="sg-core" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.655149 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="sg-core" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.655439 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="ceilometer-central-agent" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.655533 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="sg-core" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.655609 4761 
memory_manager.go:354] "RemoveStaleState removing state" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="ceilometer-notification-agent" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.655673 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="proxy-httpd" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.655761 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="840f778c-fb9b-4f24-b884-fb58aa298ad5" containerName="collect-profiles" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.657906 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.667115 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.667403 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.696679 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.776987 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-log-httpd\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.777126 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 
08:15:06.777208 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxx6c\" (UniqueName: \"kubernetes.io/projected/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-kube-api-access-rxx6c\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.777241 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-config-data\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.777359 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-scripts\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.777389 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-run-httpd\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.777445 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.880328 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxx6c\" (UniqueName: 
\"kubernetes.io/projected/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-kube-api-access-rxx6c\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.880531 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-config-data\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.880768 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-scripts\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.880812 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-run-httpd\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.880930 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.881039 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-log-httpd\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.881162 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.881775 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-run-httpd\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.881899 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-log-httpd\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.887940 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-config-data\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.890366 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-scripts\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.895543 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 
07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.903003 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.923134 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxx6c\" (UniqueName: \"kubernetes.io/projected/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-kube-api-access-rxx6c\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.980063 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:07 crc kubenswrapper[4761]: I0307 08:15:07.467745 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:07 crc kubenswrapper[4761]: I0307 08:15:07.721315 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" path="/var/lib/kubelet/pods/204cf001-190d-4ecc-9bbf-7ba7fe2bad14/volumes" Mar 07 08:15:08 crc kubenswrapper[4761]: E0307 08:15:08.243498 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.441751 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.514497 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-combined-ca-bundle\") pod \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.514811 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-config-data\") pod \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.514872 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rlfs\" (UniqueName: \"kubernetes.io/projected/1cceca9f-0dae-4298-b495-2c2e09e6e63d-kube-api-access-9rlfs\") pod \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.519448 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cceca9f-0dae-4298-b495-2c2e09e6e63d-kube-api-access-9rlfs" (OuterVolumeSpecName: "kube-api-access-9rlfs") pod "1cceca9f-0dae-4298-b495-2c2e09e6e63d" (UID: "1cceca9f-0dae-4298-b495-2c2e09e6e63d"). InnerVolumeSpecName "kube-api-access-9rlfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.548837 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cceca9f-0dae-4298-b495-2c2e09e6e63d" (UID: "1cceca9f-0dae-4298-b495-2c2e09e6e63d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.557317 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-config-data" (OuterVolumeSpecName: "config-data") pod "1cceca9f-0dae-4298-b495-2c2e09e6e63d" (UID: "1cceca9f-0dae-4298-b495-2c2e09e6e63d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.579925 4761 generic.go:334] "Generic (PLEG): container finished" podID="1cceca9f-0dae-4298-b495-2c2e09e6e63d" containerID="f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65" exitCode=137 Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.580003 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1cceca9f-0dae-4298-b495-2c2e09e6e63d","Type":"ContainerDied","Data":"f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65"} Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.580039 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1cceca9f-0dae-4298-b495-2c2e09e6e63d","Type":"ContainerDied","Data":"92a1a74929d44f1124508269aca2a63594ce2385d95da027c7837c515cc36b31"} Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.580056 4761 scope.go:117] "RemoveContainer" containerID="f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.580182 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.603107 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerStarted","Data":"24ce24b7ae154c50bbadc9b227eb94ce0080c6f5a420a8791dcddd59fe83f5fc"} Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.603302 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-api" containerID="cri-o://4e7d19ecfc8d3734356a0832721b8ff789bad3f1d623fbc7262aab81f59906fc" gracePeriod=30 Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.603990 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-listener" containerID="cri-o://24ce24b7ae154c50bbadc9b227eb94ce0080c6f5a420a8791dcddd59fe83f5fc" gracePeriod=30 Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.604060 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-notifier" containerID="cri-o://c15a8f34b90748d4123aa8305d977da985ad4ee833bd6258f7893d25a0f01981" gracePeriod=30 Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.604108 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-evaluator" containerID="cri-o://62efc0d0d775ac67bd7a9ef68d8de2bc66a98a78db91f3f6ce87d93e2f2a1663" gracePeriod=30 Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.620435 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerStarted","Data":"4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9"} Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.620482 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerStarted","Data":"ea042ceae08044af6a3f124f836f1b6e9fc3f0772a9090100be6273d6eae324e"} Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.620660 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.620693 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rlfs\" (UniqueName: \"kubernetes.io/projected/1cceca9f-0dae-4298-b495-2c2e09e6e63d-kube-api-access-9rlfs\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.620704 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.638180 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.7253733799999997 podStartE2EDuration="8.638160882s" podCreationTimestamp="2026-03-07 08:15:00 +0000 UTC" firstStartedPulling="2026-03-07 08:15:01.783831505 +0000 UTC m=+1558.692997980" lastFinishedPulling="2026-03-07 08:15:07.696618977 +0000 UTC m=+1564.605785482" observedRunningTime="2026-03-07 08:15:08.621383474 +0000 UTC m=+1565.530549949" watchObservedRunningTime="2026-03-07 08:15:08.638160882 +0000 UTC m=+1565.547327357" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.705036 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.730800 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.736255 4761 scope.go:117] "RemoveContainer" containerID="f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65" Mar 07 08:15:08 crc kubenswrapper[4761]: E0307 08:15:08.737063 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65\": container with ID starting with f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65 not found: ID does not exist" containerID="f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.737100 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65"} err="failed to get container status \"f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65\": rpc error: code = NotFound desc = could not find container \"f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65\": container with ID starting with f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65 not found: ID does not exist" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.753839 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 08:15:08 crc kubenswrapper[4761]: E0307 08:15:08.754491 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cceca9f-0dae-4298-b495-2c2e09e6e63d" containerName="nova-cell1-novncproxy-novncproxy" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.754516 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cceca9f-0dae-4298-b495-2c2e09e6e63d" 
containerName="nova-cell1-novncproxy-novncproxy" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.754884 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cceca9f-0dae-4298-b495-2c2e09e6e63d" containerName="nova-cell1-novncproxy-novncproxy" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.755934 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.760110 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.760729 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.760817 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.774185 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.828790 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjbt9\" (UniqueName: \"kubernetes.io/projected/ff986583-4706-47fa-9fec-eb503de7cac1-kube-api-access-bjbt9\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.828848 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc 
kubenswrapper[4761]: I0307 08:15:08.828904 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.829058 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.829131 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.932406 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.932510 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjbt9\" (UniqueName: \"kubernetes.io/projected/ff986583-4706-47fa-9fec-eb503de7cac1-kube-api-access-bjbt9\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc 
kubenswrapper[4761]: I0307 08:15:08.932542 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.932590 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.932672 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.937238 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.937307 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.938072 4761 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.938154 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.954446 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjbt9\" (UniqueName: \"kubernetes.io/projected/ff986583-4706-47fa-9fec-eb503de7cac1-kube-api-access-bjbt9\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 07 08:15:09 crc kubenswrapper[4761]: I0307 08:15:09.077500 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 07 08:15:09 crc kubenswrapper[4761]: I0307 08:15:09.624797 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 07 08:15:09 crc kubenswrapper[4761]: W0307 08:15:09.631413 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff986583_4706_47fa_9fec_eb503de7cac1.slice/crio-99bef01e612d3b988be818139917bf4a9f445c5128003237e591fa27497f3a6b WatchSource:0}: Error finding container 99bef01e612d3b988be818139917bf4a9f445c5128003237e591fa27497f3a6b: Status 404 returned error can't find the container with id 99bef01e612d3b988be818139917bf4a9f445c5128003237e591fa27497f3a6b
Mar 07 08:15:09 crc kubenswrapper[4761]: I0307 08:15:09.647860 4761 generic.go:334] "Generic (PLEG): container finished" podID="5f20b55e-e643-4c84-8929-dccc23092137" containerID="62efc0d0d775ac67bd7a9ef68d8de2bc66a98a78db91f3f6ce87d93e2f2a1663" exitCode=0
Mar 07 08:15:09 crc kubenswrapper[4761]: I0307 08:15:09.647892 4761 generic.go:334] "Generic (PLEG): container finished" podID="5f20b55e-e643-4c84-8929-dccc23092137" containerID="4e7d19ecfc8d3734356a0832721b8ff789bad3f1d623fbc7262aab81f59906fc" exitCode=0
Mar 07 08:15:09 crc kubenswrapper[4761]: I0307 08:15:09.647935 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerDied","Data":"62efc0d0d775ac67bd7a9ef68d8de2bc66a98a78db91f3f6ce87d93e2f2a1663"}
Mar 07 08:15:09 crc kubenswrapper[4761]: I0307 08:15:09.647960 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerDied","Data":"4e7d19ecfc8d3734356a0832721b8ff789bad3f1d623fbc7262aab81f59906fc"}
Mar 07 08:15:09 crc kubenswrapper[4761]: I0307 08:15:09.653117 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerStarted","Data":"c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c"}
Mar 07 08:15:09 crc kubenswrapper[4761]: I0307 08:15:09.737346 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cceca9f-0dae-4298-b495-2c2e09e6e63d" path="/var/lib/kubelet/pods/1cceca9f-0dae-4298-b495-2c2e09e6e63d/volumes"
Mar 07 08:15:10 crc kubenswrapper[4761]: I0307 08:15:10.672295 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ff986583-4706-47fa-9fec-eb503de7cac1","Type":"ContainerStarted","Data":"0cafabde7f9061fd5ddec812833139f481d9698cae73c9af9d5be94bffbef661"}
Mar 07 08:15:10 crc kubenswrapper[4761]: I0307 08:15:10.672789 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ff986583-4706-47fa-9fec-eb503de7cac1","Type":"ContainerStarted","Data":"99bef01e612d3b988be818139917bf4a9f445c5128003237e591fa27497f3a6b"}
Mar 07 08:15:10 crc kubenswrapper[4761]: I0307 08:15:10.681878 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerStarted","Data":"7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142"}
Mar 07 08:15:10 crc kubenswrapper[4761]: I0307 08:15:10.699084 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.699064737 podStartE2EDuration="2.699064737s" podCreationTimestamp="2026-03-07 08:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:15:10.694680428 +0000 UTC m=+1567.603846903" watchObservedRunningTime="2026-03-07 08:15:10.699064737 +0000 UTC m=+1567.608231212"
Mar 07 08:15:11 crc kubenswrapper[4761]: I0307 08:15:11.675594 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 07 08:15:11 crc kubenswrapper[4761]: I0307 08:15:11.677143 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 07 08:15:11 crc kubenswrapper[4761]: I0307 08:15:11.679233 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 07 08:15:11 crc kubenswrapper[4761]: I0307 08:15:11.679988 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 07 08:15:11 crc kubenswrapper[4761]: I0307 08:15:11.733489 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerStarted","Data":"bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a"}
Mar 07 08:15:11 crc kubenswrapper[4761]: I0307 08:15:11.733536 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 07 08:15:11 crc kubenswrapper[4761]: I0307 08:15:11.733568 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 07 08:15:11 crc kubenswrapper[4761]: I0307 08:15:11.761474 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.102757928 podStartE2EDuration="5.761455057s" podCreationTimestamp="2026-03-07 08:15:06 +0000 UTC" firstStartedPulling="2026-03-07 08:15:07.66626608 +0000 UTC m=+1564.575432555" lastFinishedPulling="2026-03-07 08:15:11.324963199 +0000 UTC m=+1568.234129684" observedRunningTime="2026-03-07 08:15:11.748873643 +0000 UTC m=+1568.658040138" watchObservedRunningTime="2026-03-07 08:15:11.761455057 +0000 UTC m=+1568.670621532"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.082957 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"]
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.085883 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.115763 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"]
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.119782 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.119833 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.119871 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.119924 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kbd2\" (UniqueName: \"kubernetes.io/projected/17b567eb-878f-4cb2-9da6-7d04193f02e7-kube-api-access-4kbd2\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.120009 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-config\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.120040 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.222011 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.222074 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.222112 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.222168 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kbd2\" (UniqueName: \"kubernetes.io/projected/17b567eb-878f-4cb2-9da6-7d04193f02e7-kube-api-access-4kbd2\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.222254 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-config\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.222288 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.223047 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.225390 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.225923 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.226632 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.226642 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-config\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.256456 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kbd2\" (UniqueName: \"kubernetes.io/projected/17b567eb-878f-4cb2-9da6-7d04193f02e7-kube-api-access-4kbd2\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.437909 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.745262 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 07 08:15:13 crc kubenswrapper[4761]: I0307 08:15:13.066459 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"]
Mar 07 08:15:13 crc kubenswrapper[4761]: I0307 08:15:13.764883 4761 generic.go:334] "Generic (PLEG): container finished" podID="17b567eb-878f-4cb2-9da6-7d04193f02e7" containerID="c9434e396ec8273fc0ff635acc03c308492a76b2cb653926f6ce7a0fb4bf25ef" exitCode=0
Mar 07 08:15:13 crc kubenswrapper[4761]: I0307 08:15:13.767915 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" event={"ID":"17b567eb-878f-4cb2-9da6-7d04193f02e7","Type":"ContainerDied","Data":"c9434e396ec8273fc0ff635acc03c308492a76b2cb653926f6ce7a0fb4bf25ef"}
Mar 07 08:15:13 crc kubenswrapper[4761]: I0307 08:15:13.767988 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" event={"ID":"17b567eb-878f-4cb2-9da6-7d04193f02e7","Type":"ContainerStarted","Data":"8caa1dca21d992e48acf15843168d308bfc2d2443ea50cbda5239b58c25dbe0b"}
Mar 07 08:15:13 crc kubenswrapper[4761]: I0307 08:15:13.782620 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 08:15:13 crc kubenswrapper[4761]: I0307 08:15:13.782676 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 08:15:13 crc kubenswrapper[4761]: I0307 08:15:13.782729 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9"
Mar 07 08:15:13 crc kubenswrapper[4761]: I0307 08:15:13.783440 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"884da56902d61ce2a23842311611c1facb0e638b212880b855a9c7825ef51b45"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 07 08:15:13 crc kubenswrapper[4761]: I0307 08:15:13.783500 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://884da56902d61ce2a23842311611c1facb0e638b212880b855a9c7825ef51b45" gracePeriod=600
Mar 07 08:15:13 crc kubenswrapper[4761]: E0307 08:15:13.808197 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]"
Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.078477 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.622838 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.775392 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" event={"ID":"17b567eb-878f-4cb2-9da6-7d04193f02e7","Type":"ContainerStarted","Data":"6b8d401dab7334c08e66ac3f5216b07310afe3106177b3008889e75b361dfdf4"}
Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.775540 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.778147 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="884da56902d61ce2a23842311611c1facb0e638b212880b855a9c7825ef51b45" exitCode=0
Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.778218 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"884da56902d61ce2a23842311611c1facb0e638b212880b855a9c7825ef51b45"}
Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.778265 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806"}
Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.778286 4761 scope.go:117] "RemoveContainer" containerID="c720defb28c06a1aa2b8b26acca0b7c32fc87b6223c85d1c22d3f2b9565b9ee4"
Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.778371 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-log" containerID="cri-o://7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4" gracePeriod=30
Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.778423 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-api" containerID="cri-o://ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44" gracePeriod=30
Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.801595 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" podStartSLOduration=2.801573296 podStartE2EDuration="2.801573296s" podCreationTimestamp="2026-03-07 08:15:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:15:14.797225187 +0000 UTC m=+1571.706391682" watchObservedRunningTime="2026-03-07 08:15:14.801573296 +0000 UTC m=+1571.710739771"
Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.498832 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.500245 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="ceilometer-central-agent" containerID="cri-o://4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9" gracePeriod=30
Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.500292 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="sg-core" containerID="cri-o://7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142" gracePeriod=30
Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.500347 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="ceilometer-notification-agent" containerID="cri-o://c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c" gracePeriod=30
Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.500342 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="proxy-httpd" containerID="cri-o://bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a" gracePeriod=30
Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.801689 4761 generic.go:334] "Generic (PLEG): container finished" podID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerID="bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a" exitCode=0
Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.801977 4761 generic.go:334] "Generic (PLEG): container finished" podID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerID="7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142" exitCode=2
Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.801862 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerDied","Data":"bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a"}
Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.802062 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerDied","Data":"7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142"}
Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.805652 4761 generic.go:334] "Generic (PLEG): container finished" podID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerID="7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4" exitCode=143
Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.805728 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef52f146-bda3-462c-9d12-7f3200a1161b","Type":"ContainerDied","Data":"7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4"}
Mar 07 08:15:16 crc kubenswrapper[4761]: I0307 08:15:16.822471 4761 generic.go:334] "Generic (PLEG): container finished" podID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerID="c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c" exitCode=0
Mar 07 08:15:16 crc kubenswrapper[4761]: I0307 08:15:16.822577 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerDied","Data":"c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c"}
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.349446 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.487875 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-run-httpd\") pod \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") "
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.487926 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-combined-ca-bundle\") pod \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") "
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.488024 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-scripts\") pod \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") "
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.488109 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-log-httpd\") pod \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") "
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.488221 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxx6c\" (UniqueName: \"kubernetes.io/projected/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-kube-api-access-rxx6c\") pod \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") "
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.488248 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-config-data\") pod \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") "
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.488272 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-sg-core-conf-yaml\") pod \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") "
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.490245 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f9506ccb-7b48-4936-ad2a-ddfb47bd804b" (UID: "f9506ccb-7b48-4936-ad2a-ddfb47bd804b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.490377 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f9506ccb-7b48-4936-ad2a-ddfb47bd804b" (UID: "f9506ccb-7b48-4936-ad2a-ddfb47bd804b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.495743 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-kube-api-access-rxx6c" (OuterVolumeSpecName: "kube-api-access-rxx6c") pod "f9506ccb-7b48-4936-ad2a-ddfb47bd804b" (UID: "f9506ccb-7b48-4936-ad2a-ddfb47bd804b"). InnerVolumeSpecName "kube-api-access-rxx6c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.495870 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-scripts" (OuterVolumeSpecName: "scripts") pod "f9506ccb-7b48-4936-ad2a-ddfb47bd804b" (UID: "f9506ccb-7b48-4936-ad2a-ddfb47bd804b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.536371 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f9506ccb-7b48-4936-ad2a-ddfb47bd804b" (UID: "f9506ccb-7b48-4936-ad2a-ddfb47bd804b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.591003 4761 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.591045 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-scripts\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.591055 4761 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.591064 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxx6c\" (UniqueName: \"kubernetes.io/projected/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-kube-api-access-rxx6c\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.591074 4761 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.617701 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9506ccb-7b48-4936-ad2a-ddfb47bd804b" (UID: "f9506ccb-7b48-4936-ad2a-ddfb47bd804b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.637992 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-config-data" (OuterVolumeSpecName: "config-data") pod "f9506ccb-7b48-4936-ad2a-ddfb47bd804b" (UID: "f9506ccb-7b48-4936-ad2a-ddfb47bd804b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.694090 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.694150 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-config-data\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.842021 4761 generic.go:334] "Generic (PLEG): container finished" podID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerID="4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9" exitCode=0
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.842070 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerDied","Data":"4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9"}
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.842099 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerDied","Data":"ea042ceae08044af6a3f124f836f1b6e9fc3f0772a9090100be6273d6eae324e"}
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.842119 4761 scope.go:117] "RemoveContainer" containerID="bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a"
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.842131 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.930873 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.942442 4761 scope.go:117] "RemoveContainer" containerID="7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142"
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.972565 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.985600 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 07 08:15:17 crc kubenswrapper[4761]: E0307 08:15:17.986333 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="ceilometer-notification-agent"
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.986358 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="ceilometer-notification-agent"
Mar 07 08:15:17 crc kubenswrapper[4761]: E0307 08:15:17.986385 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="sg-core"
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.986394 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="sg-core"
Mar 07 08:15:17 crc kubenswrapper[4761]: E0307 08:15:17.986427 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="ceilometer-central-agent"
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.986435 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="ceilometer-central-agent"
Mar 07 08:15:17 crc kubenswrapper[4761]: E0307 08:15:17.986469 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="proxy-httpd"
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.986477 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="proxy-httpd"
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.986853 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="proxy-httpd"
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.986880 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="ceilometer-central-agent"
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.986908 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="ceilometer-notification-agent"
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.986924 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="sg-core"
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.989532 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.991874 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.993649 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.007227 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.040472 4761 scope.go:117] "RemoveContainer" containerID="c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c"
Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.094929 4761 scope.go:117] "RemoveContainer" containerID="4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9"
Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.103942 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-log-httpd\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0"
Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.104022 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0"
Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.104172 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76z9f\" (UniqueName: \"kubernetes.io/projected/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-kube-api-access-76z9f\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " 
pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.104259 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-config-data\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.104376 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.104477 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-scripts\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.104653 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-run-httpd\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.124043 4761 scope.go:117] "RemoveContainer" containerID="bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a" Mar 07 08:15:18 crc kubenswrapper[4761]: E0307 08:15:18.124414 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a\": container with ID starting with 
bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a not found: ID does not exist" containerID="bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.124451 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a"} err="failed to get container status \"bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a\": rpc error: code = NotFound desc = could not find container \"bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a\": container with ID starting with bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a not found: ID does not exist" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.124477 4761 scope.go:117] "RemoveContainer" containerID="7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142" Mar 07 08:15:18 crc kubenswrapper[4761]: E0307 08:15:18.124851 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142\": container with ID starting with 7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142 not found: ID does not exist" containerID="7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.124890 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142"} err="failed to get container status \"7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142\": rpc error: code = NotFound desc = could not find container \"7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142\": container with ID starting with 7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142 not found: ID does not 
exist" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.124912 4761 scope.go:117] "RemoveContainer" containerID="c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c" Mar 07 08:15:18 crc kubenswrapper[4761]: E0307 08:15:18.125134 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c\": container with ID starting with c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c not found: ID does not exist" containerID="c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.125158 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c"} err="failed to get container status \"c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c\": rpc error: code = NotFound desc = could not find container \"c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c\": container with ID starting with c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c not found: ID does not exist" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.125175 4761 scope.go:117] "RemoveContainer" containerID="4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9" Mar 07 08:15:18 crc kubenswrapper[4761]: E0307 08:15:18.125420 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9\": container with ID starting with 4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9 not found: ID does not exist" containerID="4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.125440 4761 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9"} err="failed to get container status \"4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9\": rpc error: code = NotFound desc = could not find container \"4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9\": container with ID starting with 4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9 not found: ID does not exist" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.207433 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-log-httpd\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.207526 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.207568 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76z9f\" (UniqueName: \"kubernetes.io/projected/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-kube-api-access-76z9f\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.207602 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-config-data\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.207638 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.207686 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-scripts\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.207792 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-run-httpd\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.208133 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-log-httpd\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.208234 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-run-httpd\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.213362 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-scripts\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 
08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.215448 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-config-data\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.217362 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.230706 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.231477 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76z9f\" (UniqueName: \"kubernetes.io/projected/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-kube-api-access-76z9f\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: E0307 08:15:18.334562 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.384618 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.471577 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.619360 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-combined-ca-bundle\") pod \"ef52f146-bda3-462c-9d12-7f3200a1161b\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.619529 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmdjm\" (UniqueName: \"kubernetes.io/projected/ef52f146-bda3-462c-9d12-7f3200a1161b-kube-api-access-mmdjm\") pod \"ef52f146-bda3-462c-9d12-7f3200a1161b\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.619567 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef52f146-bda3-462c-9d12-7f3200a1161b-logs\") pod \"ef52f146-bda3-462c-9d12-7f3200a1161b\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.619615 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-config-data\") pod \"ef52f146-bda3-462c-9d12-7f3200a1161b\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.620783 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef52f146-bda3-462c-9d12-7f3200a1161b-logs" (OuterVolumeSpecName: "logs") pod "ef52f146-bda3-462c-9d12-7f3200a1161b" (UID: "ef52f146-bda3-462c-9d12-7f3200a1161b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.627219 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef52f146-bda3-462c-9d12-7f3200a1161b-kube-api-access-mmdjm" (OuterVolumeSpecName: "kube-api-access-mmdjm") pod "ef52f146-bda3-462c-9d12-7f3200a1161b" (UID: "ef52f146-bda3-462c-9d12-7f3200a1161b"). InnerVolumeSpecName "kube-api-access-mmdjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.657992 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-config-data" (OuterVolumeSpecName: "config-data") pod "ef52f146-bda3-462c-9d12-7f3200a1161b" (UID: "ef52f146-bda3-462c-9d12-7f3200a1161b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.690851 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef52f146-bda3-462c-9d12-7f3200a1161b" (UID: "ef52f146-bda3-462c-9d12-7f3200a1161b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.722855 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.722896 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmdjm\" (UniqueName: \"kubernetes.io/projected/ef52f146-bda3-462c-9d12-7f3200a1161b-kube-api-access-mmdjm\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.722911 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef52f146-bda3-462c-9d12-7f3200a1161b-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.722923 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.864489 4761 generic.go:334] "Generic (PLEG): container finished" podID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerID="ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44" exitCode=0 Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.864528 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef52f146-bda3-462c-9d12-7f3200a1161b","Type":"ContainerDied","Data":"ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44"} Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.864550 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef52f146-bda3-462c-9d12-7f3200a1161b","Type":"ContainerDied","Data":"38c8e87693ca31cdfaa2df02df9f5f7dbd3ac9a1d760025ec704d4f8e0d85068"} Mar 07 08:15:18 crc kubenswrapper[4761]: 
I0307 08:15:18.864567 4761 scope.go:117] "RemoveContainer" containerID="ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.864676 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.924853 4761 scope.go:117] "RemoveContainer" containerID="7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4" Mar 07 08:15:18 crc kubenswrapper[4761]: W0307 08:15:18.930197 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27e0eb4e_cf40_4edc_aa40_d90412b78ad7.slice/crio-2d73a155caa3071bba582126bc91455e8983f20c114de33421479dadadcdca21 WatchSource:0}: Error finding container 2d73a155caa3071bba582126bc91455e8983f20c114de33421479dadadcdca21: Status 404 returned error can't find the container with id 2d73a155caa3071bba582126bc91455e8983f20c114de33421479dadadcdca21 Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.936452 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.972768 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.995810 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.015421 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:19 crc kubenswrapper[4761]: E0307 08:15:19.015996 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-log" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.016016 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" 
containerName="nova-api-log" Mar 07 08:15:19 crc kubenswrapper[4761]: E0307 08:15:19.016093 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-api" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.016103 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-api" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.016347 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-api" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.016378 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-log" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.017452 4761 scope.go:117] "RemoveContainer" containerID="ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44" Mar 07 08:15:19 crc kubenswrapper[4761]: E0307 08:15:19.019588 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44\": container with ID starting with ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44 not found: ID does not exist" containerID="ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.019629 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44"} err="failed to get container status \"ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44\": rpc error: code = NotFound desc = could not find container \"ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44\": container with ID starting with 
ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44 not found: ID does not exist" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.019656 4761 scope.go:117] "RemoveContainer" containerID="7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4" Mar 07 08:15:19 crc kubenswrapper[4761]: E0307 08:15:19.020343 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4\": container with ID starting with 7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4 not found: ID does not exist" containerID="7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.020376 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4"} err="failed to get container status \"7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4\": rpc error: code = NotFound desc = could not find container \"7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4\": container with ID starting with 7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4 not found: ID does not exist" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.022864 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.025670 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.025670 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.026444 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.032080 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.079804 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.100040 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.137681 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b56x9\" (UniqueName: \"kubernetes.io/projected/84207154-36c3-4462-974a-9ad6ac33a552-kube-api-access-b56x9\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.137785 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84207154-36c3-4462-974a-9ad6ac33a552-logs\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.137832 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.137903 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-internal-tls-certs\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.137931 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-public-tls-certs\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.138261 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-config-data\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.240473 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b56x9\" (UniqueName: \"kubernetes.io/projected/84207154-36c3-4462-974a-9ad6ac33a552-kube-api-access-b56x9\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.240560 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84207154-36c3-4462-974a-9ad6ac33a552-logs\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 
08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.240592 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0"
Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.240617 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-internal-tls-certs\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0"
Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.240636 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-public-tls-certs\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0"
Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.240736 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-config-data\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0"
Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.241366 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84207154-36c3-4462-974a-9ad6ac33a552-logs\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0"
Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.249591 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-public-tls-certs\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0"
Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.249605 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-config-data\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0"
Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.250075 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0"
Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.253549 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-internal-tls-certs\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0"
Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.259265 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b56x9\" (UniqueName: \"kubernetes.io/projected/84207154-36c3-4462-974a-9ad6ac33a552-kube-api-access-b56x9\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0"
Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.369626 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.740390 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" path="/var/lib/kubelet/pods/ef52f146-bda3-462c-9d12-7f3200a1161b/volumes"
Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.741677 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" path="/var/lib/kubelet/pods/f9506ccb-7b48-4936-ad2a-ddfb47bd804b/volumes"
Mar 07 08:15:19 crc kubenswrapper[4761]: W0307 08:15:19.871751 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84207154_36c3_4462_974a_9ad6ac33a552.slice/crio-599ce1ac11fd55789e12a3eec3da7221320f8c5c003f978248038fa4e5ea16de WatchSource:0}: Error finding container 599ce1ac11fd55789e12a3eec3da7221320f8c5c003f978248038fa4e5ea16de: Status 404 returned error can't find the container with id 599ce1ac11fd55789e12a3eec3da7221320f8c5c003f978248038fa4e5ea16de
Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.874515 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.880746 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerStarted","Data":"42ec41a29efcc250aea778b070c1f73c664bcc94b85a43a129c7768c52da4fad"}
Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.880792 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerStarted","Data":"2d73a155caa3071bba582126bc91455e8983f20c114de33421479dadadcdca21"}
Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.900409 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.092977 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-rrf49"]
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.100978 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rrf49"
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.105747 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.105891 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.133736 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-rrf49"]
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.160898 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-config-data\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49"
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.160987 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-scripts\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49"
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.161142 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mlbc\" (UniqueName: \"kubernetes.io/projected/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-kube-api-access-9mlbc\") pod
\"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49"
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.161280 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49"
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.263609 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-config-data\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49"
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.263660 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-scripts\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49"
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.263821 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mlbc\" (UniqueName: \"kubernetes.io/projected/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-kube-api-access-9mlbc\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49"
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.263944 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49"
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.279198 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-scripts\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49"
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.279685 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-config-data\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49"
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.280237 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49"
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.288265 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mlbc\" (UniqueName: \"kubernetes.io/projected/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-kube-api-access-9mlbc\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49"
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.525140 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rrf49"
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.920194 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84207154-36c3-4462-974a-9ad6ac33a552","Type":"ContainerStarted","Data":"1bb9ea1694b5c1b99ce123d84c9e006882409ce03bf0e86a8ca611cbf4e2ccb5"}
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.920487 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84207154-36c3-4462-974a-9ad6ac33a552","Type":"ContainerStarted","Data":"6938de58a8382ee7e59e4bcd1550bee838a5a3a7c6813dc3d55a01ae08849c78"}
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.920502 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84207154-36c3-4462-974a-9ad6ac33a552","Type":"ContainerStarted","Data":"599ce1ac11fd55789e12a3eec3da7221320f8c5c003f978248038fa4e5ea16de"}
Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.929243 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerStarted","Data":"65b25fb5e9d89c0452608d4caa0eda40d046ec312401e66e3f11d7503dbeb516"}
Mar 07 08:15:21 crc kubenswrapper[4761]: I0307 08:15:21.140160 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.140139559 podStartE2EDuration="3.140139559s" podCreationTimestamp="2026-03-07 08:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:15:20.950864077 +0000 UTC m=+1577.860030552" watchObservedRunningTime="2026-03-07 08:15:21.140139559 +0000 UTC m=+1578.049306034"
Mar 07 08:15:21 crc kubenswrapper[4761]: I0307 08:15:21.177651 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-rrf49"]
Mar 07 08:15:21 crc kubenswrapper[4761]: I0307 08:15:21.940529 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rrf49" event={"ID":"a7a46a5d-0880-4af9-a48f-3599f8b1dea7","Type":"ContainerStarted","Data":"e04c2b95dad8241d3b28cfd6ddafa5597a39cc35f1df65e0fd0a09feec72001e"}
Mar 07 08:15:21 crc kubenswrapper[4761]: I0307 08:15:21.940776 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rrf49" event={"ID":"a7a46a5d-0880-4af9-a48f-3599f8b1dea7","Type":"ContainerStarted","Data":"bfa9905bae5ecb9376fe10182cb9ed8e925754d76728e09eea9bbe0303a0a897"}
Mar 07 08:15:21 crc kubenswrapper[4761]: I0307 08:15:21.943656 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerStarted","Data":"7bec2a6b9f93d88e6175882138e628f39fffb24361734adf393186dbc436254e"}
Mar 07 08:15:21 crc kubenswrapper[4761]: I0307 08:15:21.966271 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-rrf49" podStartSLOduration=1.966251664 podStartE2EDuration="1.966251664s" podCreationTimestamp="2026-03-07 08:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:15:21.957605179 +0000 UTC m=+1578.866771654" watchObservedRunningTime="2026-03-07 08:15:21.966251664 +0000 UTC m=+1578.875418139"
Mar 07 08:15:22 crc kubenswrapper[4761]: I0307 08:15:22.439570 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:15:22 crc kubenswrapper[4761]: I0307 08:15:22.511887 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-vdbwn"]
Mar 07 08:15:22 crc kubenswrapper[4761]: I0307 08:15:22.512160 4761 kuberuntime_container.go:808] "Killing container with a grace period"
pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" podUID="bc9f56df-ded6-4d8a-8075-645d640f6b5f" containerName="dnsmasq-dns" containerID="cri-o://58a897c9e680fbc80c648ba02291c7e229e45e6c318e29a514b872744bbb65c0" gracePeriod=10
Mar 07 08:15:22 crc kubenswrapper[4761]: I0307 08:15:22.962034 4761 generic.go:334] "Generic (PLEG): container finished" podID="bc9f56df-ded6-4d8a-8075-645d640f6b5f" containerID="58a897c9e680fbc80c648ba02291c7e229e45e6c318e29a514b872744bbb65c0" exitCode=0
Mar 07 08:15:22 crc kubenswrapper[4761]: I0307 08:15:22.962264 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" event={"ID":"bc9f56df-ded6-4d8a-8075-645d640f6b5f","Type":"ContainerDied","Data":"58a897c9e680fbc80c648ba02291c7e229e45e6c318e29a514b872744bbb65c0"}
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.170408 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.280643 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzpw7\" (UniqueName: \"kubernetes.io/projected/bc9f56df-ded6-4d8a-8075-645d640f6b5f-kube-api-access-xzpw7\") pod \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") "
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.280741 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-config\") pod \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") "
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.280778 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-svc\") pod \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") "
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.280841 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-swift-storage-0\") pod \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") "
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.280938 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-nb\") pod \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") "
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.280996 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-sb\") pod \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") "
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.302987 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9f56df-ded6-4d8a-8075-645d640f6b5f-kube-api-access-xzpw7" (OuterVolumeSpecName: "kube-api-access-xzpw7") pod "bc9f56df-ded6-4d8a-8075-645d640f6b5f" (UID: "bc9f56df-ded6-4d8a-8075-645d640f6b5f"). InnerVolumeSpecName "kube-api-access-xzpw7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.351702 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc9f56df-ded6-4d8a-8075-645d640f6b5f" (UID: "bc9f56df-ded6-4d8a-8075-645d640f6b5f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.383406 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc9f56df-ded6-4d8a-8075-645d640f6b5f" (UID: "bc9f56df-ded6-4d8a-8075-645d640f6b5f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.384498 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.384522 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzpw7\" (UniqueName: \"kubernetes.io/projected/bc9f56df-ded6-4d8a-8075-645d640f6b5f-kube-api-access-xzpw7\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.384532 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.389883 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc9f56df-ded6-4d8a-8075-645d640f6b5f" (UID: "bc9f56df-ded6-4d8a-8075-645d640f6b5f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.393350 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bc9f56df-ded6-4d8a-8075-645d640f6b5f" (UID: "bc9f56df-ded6-4d8a-8075-645d640f6b5f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.395912 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-config" (OuterVolumeSpecName: "config") pod "bc9f56df-ded6-4d8a-8075-645d640f6b5f" (UID: "bc9f56df-ded6-4d8a-8075-645d640f6b5f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.486390 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-config\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.486424 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.486438 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.976597 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
event={"ID":"bc9f56df-ded6-4d8a-8075-645d640f6b5f","Type":"ContainerDied","Data":"1266d5c0ea10cbc1c7d4f4cb004228b7709c180fef5b782ffe5b05b34b696ff6"}
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.976974 4761 scope.go:117] "RemoveContainer" containerID="58a897c9e680fbc80c648ba02291c7e229e45e6c318e29a514b872744bbb65c0"
Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.976653 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn"
Mar 07 08:15:24 crc kubenswrapper[4761]: I0307 08:15:24.133384 4761 scope.go:117] "RemoveContainer" containerID="3ada3d87e766661465ef73a62b9cc99eb8c306100d63ce1c417917c314038b0c"
Mar 07 08:15:24 crc kubenswrapper[4761]: I0307 08:15:24.161207 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-vdbwn"]
Mar 07 08:15:24 crc kubenswrapper[4761]: I0307 08:15:24.185359 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-vdbwn"]
Mar 07 08:15:24 crc kubenswrapper[4761]: I0307 08:15:24.992399 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerStarted","Data":"c9c45ba443109a6b8904801f4882aa5d094d9ac1223032ff87930bb525a6b320"}
Mar 07 08:15:24 crc kubenswrapper[4761]: I0307 08:15:24.992670 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 07 08:15:25 crc kubenswrapper[4761]: I0307 08:15:25.027129 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.029352072 podStartE2EDuration="8.027103451s" podCreationTimestamp="2026-03-07 08:15:17 +0000 UTC" firstStartedPulling="2026-03-07 08:15:18.935252822 +0000 UTC m=+1575.844419297" lastFinishedPulling="2026-03-07 08:15:23.933004191 +0000 UTC m=+1580.842170676" observedRunningTime="2026-03-07 08:15:25.015756168 +0000 UTC m=+1581.924922673" watchObservedRunningTime="2026-03-07 08:15:25.027103451 +0000 UTC m=+1581.936269946"
Mar 07 08:15:25 crc kubenswrapper[4761]: I0307 08:15:25.726981 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc9f56df-ded6-4d8a-8075-645d640f6b5f" path="/var/lib/kubelet/pods/bc9f56df-ded6-4d8a-8075-645d640f6b5f/volumes"
Mar 07 08:15:27 crc kubenswrapper[4761]: I0307 08:15:27.025596 4761 generic.go:334] "Generic (PLEG): container finished" podID="a7a46a5d-0880-4af9-a48f-3599f8b1dea7" containerID="e04c2b95dad8241d3b28cfd6ddafa5597a39cc35f1df65e0fd0a09feec72001e" exitCode=0
Mar 07 08:15:27 crc kubenswrapper[4761]: I0307 08:15:27.025776 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rrf49" event={"ID":"a7a46a5d-0880-4af9-a48f-3599f8b1dea7","Type":"ContainerDied","Data":"e04c2b95dad8241d3b28cfd6ddafa5597a39cc35f1df65e0fd0a09feec72001e"}
Mar 07 08:15:28 crc kubenswrapper[4761]: E0307 08:15:28.605489 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]"
Mar 07 08:15:28 crc kubenswrapper[4761]: E0307 08:15:28.605741 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]"
Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.652232 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rrf49"
Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.726598 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-combined-ca-bundle\") pod \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") "
Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.726800 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mlbc\" (UniqueName: \"kubernetes.io/projected/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-kube-api-access-9mlbc\") pod \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") "
Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.726858 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-config-data\") pod \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") "
Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.727016 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-scripts\") pod \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") "
Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.735222 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-scripts" (OuterVolumeSpecName: "scripts") pod "a7a46a5d-0880-4af9-a48f-3599f8b1dea7" (UID: "a7a46a5d-0880-4af9-a48f-3599f8b1dea7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.750567 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-kube-api-access-9mlbc" (OuterVolumeSpecName: "kube-api-access-9mlbc") pod "a7a46a5d-0880-4af9-a48f-3599f8b1dea7" (UID: "a7a46a5d-0880-4af9-a48f-3599f8b1dea7"). InnerVolumeSpecName "kube-api-access-9mlbc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.769250 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7a46a5d-0880-4af9-a48f-3599f8b1dea7" (UID: "a7a46a5d-0880-4af9-a48f-3599f8b1dea7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.770393 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-config-data" (OuterVolumeSpecName: "config-data") pod "a7a46a5d-0880-4af9-a48f-3599f8b1dea7" (UID: "a7a46a5d-0880-4af9-a48f-3599f8b1dea7"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.829958 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-scripts\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.829999 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.830015 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mlbc\" (UniqueName: \"kubernetes.io/projected/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-kube-api-access-9mlbc\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.830026 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-config-data\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.102418 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rrf49" event={"ID":"a7a46a5d-0880-4af9-a48f-3599f8b1dea7","Type":"ContainerDied","Data":"bfa9905bae5ecb9376fe10182cb9ed8e925754d76728e09eea9bbe0303a0a897"}
Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.102463 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfa9905bae5ecb9376fe10182cb9ed8e925754d76728e09eea9bbe0303a0a897"
Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.102484 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rrf49"
Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.233630 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.233907 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="84207154-36c3-4462-974a-9ad6ac33a552" containerName="nova-api-log" containerID="cri-o://6938de58a8382ee7e59e4bcd1550bee838a5a3a7c6813dc3d55a01ae08849c78" gracePeriod=30
Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.233982 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="84207154-36c3-4462-974a-9ad6ac33a552" containerName="nova-api-api" containerID="cri-o://1bb9ea1694b5c1b99ce123d84c9e006882409ce03bf0e86a8ca611cbf4e2ccb5" gracePeriod=30
Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.256869 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.257090 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="83e95e07-cc49-4e75-a0e9-0299705fc32a" containerName="nova-scheduler-scheduler" containerID="cri-o://73e12f6e261d32881449931ab013be50ac0618447dbd3fb812023e062eb1546a" gracePeriod=30
Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.296233 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.296818 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-log" containerID="cri-o://f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110" gracePeriod=30
Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.296958 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-metadata" containerID="cri-o://5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49" gracePeriod=30
Mar 07 08:15:29 crc kubenswrapper[4761]: E0307 08:15:29.573609 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="73e12f6e261d32881449931ab013be50ac0618447dbd3fb812023e062eb1546a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 07 08:15:29 crc kubenswrapper[4761]: E0307 08:15:29.575094 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="73e12f6e261d32881449931ab013be50ac0618447dbd3fb812023e062eb1546a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 07 08:15:29 crc kubenswrapper[4761]: E0307 08:15:29.576241 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="73e12f6e261d32881449931ab013be50ac0618447dbd3fb812023e062eb1546a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 07 08:15:29 crc kubenswrapper[4761]: E0307 08:15:29.576286 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="83e95e07-cc49-4e75-a0e9-0299705fc32a" containerName="nova-scheduler-scheduler"
Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.123851 4761 generic.go:334] "Generic (PLEG): container finished" podID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerID="f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110" exitCode=143
Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.123895 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8661e6f-7759-475f-8964-bae1b8cfebbe","Type":"ContainerDied","Data":"f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110"}
Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.128180 4761 generic.go:334] "Generic (PLEG): container finished" podID="84207154-36c3-4462-974a-9ad6ac33a552" containerID="1bb9ea1694b5c1b99ce123d84c9e006882409ce03bf0e86a8ca611cbf4e2ccb5" exitCode=0
Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.128205 4761 generic.go:334] "Generic (PLEG): container finished" podID="84207154-36c3-4462-974a-9ad6ac33a552" containerID="6938de58a8382ee7e59e4bcd1550bee838a5a3a7c6813dc3d55a01ae08849c78" exitCode=143
Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.128226 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84207154-36c3-4462-974a-9ad6ac33a552","Type":"ContainerDied","Data":"1bb9ea1694b5c1b99ce123d84c9e006882409ce03bf0e86a8ca611cbf4e2ccb5"}
Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.128251 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84207154-36c3-4462-974a-9ad6ac33a552","Type":"ContainerDied","Data":"6938de58a8382ee7e59e4bcd1550bee838a5a3a7c6813dc3d55a01ae08849c78"}
Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.423252 4761 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.474447 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b56x9\" (UniqueName: \"kubernetes.io/projected/84207154-36c3-4462-974a-9ad6ac33a552-kube-api-access-b56x9\") pod \"84207154-36c3-4462-974a-9ad6ac33a552\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.474509 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84207154-36c3-4462-974a-9ad6ac33a552-logs\") pod \"84207154-36c3-4462-974a-9ad6ac33a552\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.474632 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-config-data\") pod \"84207154-36c3-4462-974a-9ad6ac33a552\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.474656 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-internal-tls-certs\") pod \"84207154-36c3-4462-974a-9ad6ac33a552\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.474739 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-public-tls-certs\") pod \"84207154-36c3-4462-974a-9ad6ac33a552\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.474790 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-combined-ca-bundle\") pod \"84207154-36c3-4462-974a-9ad6ac33a552\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.475959 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84207154-36c3-4462-974a-9ad6ac33a552-logs" (OuterVolumeSpecName: "logs") pod "84207154-36c3-4462-974a-9ad6ac33a552" (UID: "84207154-36c3-4462-974a-9ad6ac33a552"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.493852 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84207154-36c3-4462-974a-9ad6ac33a552-kube-api-access-b56x9" (OuterVolumeSpecName: "kube-api-access-b56x9") pod "84207154-36c3-4462-974a-9ad6ac33a552" (UID: "84207154-36c3-4462-974a-9ad6ac33a552"). InnerVolumeSpecName "kube-api-access-b56x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.509989 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84207154-36c3-4462-974a-9ad6ac33a552" (UID: "84207154-36c3-4462-974a-9ad6ac33a552"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.516445 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-config-data" (OuterVolumeSpecName: "config-data") pod "84207154-36c3-4462-974a-9ad6ac33a552" (UID: "84207154-36c3-4462-974a-9ad6ac33a552"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.549923 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "84207154-36c3-4462-974a-9ad6ac33a552" (UID: "84207154-36c3-4462-974a-9ad6ac33a552"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.554993 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "84207154-36c3-4462-974a-9ad6ac33a552" (UID: "84207154-36c3-4462-974a-9ad6ac33a552"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.577768 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b56x9\" (UniqueName: \"kubernetes.io/projected/84207154-36c3-4462-974a-9ad6ac33a552-kube-api-access-b56x9\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.577801 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84207154-36c3-4462-974a-9ad6ac33a552-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.577816 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.577830 4761 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-internal-tls-certs\") on node \"crc\" DevicePath 
\"\"" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.577840 4761 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.577852 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.146938 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84207154-36c3-4462-974a-9ad6ac33a552","Type":"ContainerDied","Data":"599ce1ac11fd55789e12a3eec3da7221320f8c5c003f978248038fa4e5ea16de"} Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.147951 4761 scope.go:117] "RemoveContainer" containerID="1bb9ea1694b5c1b99ce123d84c9e006882409ce03bf0e86a8ca611cbf4e2ccb5" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.148374 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.174353 4761 scope.go:117] "RemoveContainer" containerID="6938de58a8382ee7e59e4bcd1550bee838a5a3a7c6813dc3d55a01ae08849c78" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.217913 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.229570 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.240532 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:31 crc kubenswrapper[4761]: E0307 08:15:31.241216 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84207154-36c3-4462-974a-9ad6ac33a552" containerName="nova-api-api" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.241235 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="84207154-36c3-4462-974a-9ad6ac33a552" containerName="nova-api-api" Mar 07 08:15:31 crc kubenswrapper[4761]: E0307 08:15:31.241260 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84207154-36c3-4462-974a-9ad6ac33a552" containerName="nova-api-log" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.241266 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="84207154-36c3-4462-974a-9ad6ac33a552" containerName="nova-api-log" Mar 07 08:15:31 crc kubenswrapper[4761]: E0307 08:15:31.241285 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9f56df-ded6-4d8a-8075-645d640f6b5f" containerName="dnsmasq-dns" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.241291 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9f56df-ded6-4d8a-8075-645d640f6b5f" containerName="dnsmasq-dns" Mar 07 08:15:31 crc kubenswrapper[4761]: E0307 08:15:31.241314 4761 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a7a46a5d-0880-4af9-a48f-3599f8b1dea7" containerName="nova-manage" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.241321 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a46a5d-0880-4af9-a48f-3599f8b1dea7" containerName="nova-manage" Mar 07 08:15:31 crc kubenswrapper[4761]: E0307 08:15:31.241336 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9f56df-ded6-4d8a-8075-645d640f6b5f" containerName="init" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.241342 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9f56df-ded6-4d8a-8075-645d640f6b5f" containerName="init" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.241567 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9f56df-ded6-4d8a-8075-645d640f6b5f" containerName="dnsmasq-dns" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.241585 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="84207154-36c3-4462-974a-9ad6ac33a552" containerName="nova-api-api" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.241597 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7a46a5d-0880-4af9-a48f-3599f8b1dea7" containerName="nova-manage" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.241616 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="84207154-36c3-4462-974a-9ad6ac33a552" containerName="nova-api-log" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.242875 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.245079 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.245125 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.248356 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.257074 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.301040 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-config-data\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.301200 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqnnj\" (UniqueName: \"kubernetes.io/projected/c12aff9a-a09d-4da9-8a3d-d59591060f22-kube-api-access-bqnnj\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.301273 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.301293 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.301313 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c12aff9a-a09d-4da9-8a3d-d59591060f22-logs\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.301342 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-public-tls-certs\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.403199 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-config-data\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.403374 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqnnj\" (UniqueName: \"kubernetes.io/projected/c12aff9a-a09d-4da9-8a3d-d59591060f22-kube-api-access-bqnnj\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.403761 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " 
pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.403784 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.403805 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c12aff9a-a09d-4da9-8a3d-d59591060f22-logs\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.403832 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-public-tls-certs\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.408495 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-public-tls-certs\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.409112 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c12aff9a-a09d-4da9-8a3d-d59591060f22-logs\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.410112 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-config-data\") pod 
\"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.412086 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.414032 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.425703 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqnnj\" (UniqueName: \"kubernetes.io/projected/c12aff9a-a09d-4da9-8a3d-d59591060f22-kube-api-access-bqnnj\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.702545 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.730606 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84207154-36c3-4462-974a-9ad6ac33a552" path="/var/lib/kubelet/pods/84207154-36c3-4462-974a-9ad6ac33a552/volumes" Mar 07 08:15:32 crc kubenswrapper[4761]: I0307 08:15:32.176697 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:32 crc kubenswrapper[4761]: W0307 08:15:32.176991 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc12aff9a_a09d_4da9_8a3d_d59591060f22.slice/crio-fe093db2f867660c1a9fe8a75ecd63acf59d8ca3b63f17e292709334ef452231 WatchSource:0}: Error finding container fe093db2f867660c1a9fe8a75ecd63acf59d8ca3b63f17e292709334ef452231: Status 404 returned error can't find the container with id fe093db2f867660c1a9fe8a75ecd63acf59d8ca3b63f17e292709334ef452231 Mar 07 08:15:32 crc kubenswrapper[4761]: I0307 08:15:32.428379 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.0:8775/\": read tcp 10.217.0.2:58222->10.217.1.0:8775: read: connection reset by peer" Mar 07 08:15:32 crc kubenswrapper[4761]: I0307 08:15:32.429007 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.0:8775/\": read tcp 10.217.0.2:58236->10.217.1.0:8775: read: connection reset by peer" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.078201 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.145922 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-nova-metadata-tls-certs\") pod \"d8661e6f-7759-475f-8964-bae1b8cfebbe\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.145983 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-config-data\") pod \"d8661e6f-7759-475f-8964-bae1b8cfebbe\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.146091 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-combined-ca-bundle\") pod \"d8661e6f-7759-475f-8964-bae1b8cfebbe\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.146139 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5bw5\" (UniqueName: \"kubernetes.io/projected/d8661e6f-7759-475f-8964-bae1b8cfebbe-kube-api-access-k5bw5\") pod \"d8661e6f-7759-475f-8964-bae1b8cfebbe\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.146197 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8661e6f-7759-475f-8964-bae1b8cfebbe-logs\") pod \"d8661e6f-7759-475f-8964-bae1b8cfebbe\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.147355 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d8661e6f-7759-475f-8964-bae1b8cfebbe-logs" (OuterVolumeSpecName: "logs") pod "d8661e6f-7759-475f-8964-bae1b8cfebbe" (UID: "d8661e6f-7759-475f-8964-bae1b8cfebbe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.177621 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c12aff9a-a09d-4da9-8a3d-d59591060f22","Type":"ContainerStarted","Data":"6cad3273fda38655c29981cd79857babbb3c2f01e3b248b58cb571d4c3883256"} Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.177677 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c12aff9a-a09d-4da9-8a3d-d59591060f22","Type":"ContainerStarted","Data":"c2d8bc3b14afa4040c4e70fd04c899e0153190d41201aa063f5796ad79ec92f5"} Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.177690 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c12aff9a-a09d-4da9-8a3d-d59591060f22","Type":"ContainerStarted","Data":"fe093db2f867660c1a9fe8a75ecd63acf59d8ca3b63f17e292709334ef452231"} Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.184723 4761 generic.go:334] "Generic (PLEG): container finished" podID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerID="5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49" exitCode=0 Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.184765 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8661e6f-7759-475f-8964-bae1b8cfebbe","Type":"ContainerDied","Data":"5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49"} Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.184791 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"d8661e6f-7759-475f-8964-bae1b8cfebbe","Type":"ContainerDied","Data":"4186f52ac38d34bec86c4b23d24c511d84987ec76656cdaf97b0c90bf3b66e26"} Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.184808 4761 scope.go:117] "RemoveContainer" containerID="5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.184921 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.188452 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8661e6f-7759-475f-8964-bae1b8cfebbe-kube-api-access-k5bw5" (OuterVolumeSpecName: "kube-api-access-k5bw5") pod "d8661e6f-7759-475f-8964-bae1b8cfebbe" (UID: "d8661e6f-7759-475f-8964-bae1b8cfebbe"). InnerVolumeSpecName "kube-api-access-k5bw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.201542 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-config-data" (OuterVolumeSpecName: "config-data") pod "d8661e6f-7759-475f-8964-bae1b8cfebbe" (UID: "d8661e6f-7759-475f-8964-bae1b8cfebbe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.212915 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8661e6f-7759-475f-8964-bae1b8cfebbe" (UID: "d8661e6f-7759-475f-8964-bae1b8cfebbe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.219309 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.219286899 podStartE2EDuration="2.219286899s" podCreationTimestamp="2026-03-07 08:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:15:33.199314021 +0000 UTC m=+1590.108480496" watchObservedRunningTime="2026-03-07 08:15:33.219286899 +0000 UTC m=+1590.128453374" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.247010 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d8661e6f-7759-475f-8964-bae1b8cfebbe" (UID: "d8661e6f-7759-475f-8964-bae1b8cfebbe"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.249672 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-nova-metadata-tls-certs\") pod \"d8661e6f-7759-475f-8964-bae1b8cfebbe\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.250629 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8661e6f-7759-475f-8964-bae1b8cfebbe-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.250648 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.250659 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.250672 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5bw5\" (UniqueName: \"kubernetes.io/projected/d8661e6f-7759-475f-8964-bae1b8cfebbe-kube-api-access-k5bw5\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:33 crc kubenswrapper[4761]: W0307 08:15:33.256859 4761 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d8661e6f-7759-475f-8964-bae1b8cfebbe/volumes/kubernetes.io~secret/nova-metadata-tls-certs Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.256897 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-nova-metadata-tls-certs" (OuterVolumeSpecName: 
"nova-metadata-tls-certs") pod "d8661e6f-7759-475f-8964-bae1b8cfebbe" (UID: "d8661e6f-7759-475f-8964-bae1b8cfebbe"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.282241 4761 scope.go:117] "RemoveContainer" containerID="f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.313278 4761 scope.go:117] "RemoveContainer" containerID="5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49" Mar 07 08:15:33 crc kubenswrapper[4761]: E0307 08:15:33.313928 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49\": container with ID starting with 5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49 not found: ID does not exist" containerID="5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.313967 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49"} err="failed to get container status \"5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49\": rpc error: code = NotFound desc = could not find container \"5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49\": container with ID starting with 5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49 not found: ID does not exist" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.313993 4761 scope.go:117] "RemoveContainer" containerID="f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110" Mar 07 08:15:33 crc kubenswrapper[4761]: E0307 08:15:33.314534 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110\": container with ID starting with f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110 not found: ID does not exist" containerID="f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.314558 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110"} err="failed to get container status \"f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110\": rpc error: code = NotFound desc = could not find container \"f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110\": container with ID starting with f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110 not found: ID does not exist" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.353164 4761 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.524862 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.538646 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.556624 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:15:33 crc kubenswrapper[4761]: E0307 08:15:33.557284 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-metadata" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.557311 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" 
containerName="nova-metadata-metadata" Mar 07 08:15:33 crc kubenswrapper[4761]: E0307 08:15:33.557329 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-log" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.557338 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-log" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.557632 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-log" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.557669 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-metadata" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.559162 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.561955 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.562048 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.568668 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.659308 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8tfx\" (UniqueName: \"kubernetes.io/projected/34c23fbf-c0a4-4b0e-bc41-e23eab413801-kube-api-access-x8tfx\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.659589 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34c23fbf-c0a4-4b0e-bc41-e23eab413801-logs\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.659631 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c23fbf-c0a4-4b0e-bc41-e23eab413801-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.659999 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c23fbf-c0a4-4b0e-bc41-e23eab413801-config-data\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.660074 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/34c23fbf-c0a4-4b0e-bc41-e23eab413801-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.721420 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" path="/var/lib/kubelet/pods/d8661e6f-7759-475f-8964-bae1b8cfebbe/volumes" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.762659 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c23fbf-c0a4-4b0e-bc41-e23eab413801-config-data\") pod \"nova-metadata-0\" (UID: 
\"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.762747 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/34c23fbf-c0a4-4b0e-bc41-e23eab413801-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.762821 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8tfx\" (UniqueName: \"kubernetes.io/projected/34c23fbf-c0a4-4b0e-bc41-e23eab413801-kube-api-access-x8tfx\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.762945 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34c23fbf-c0a4-4b0e-bc41-e23eab413801-logs\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.762971 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c23fbf-c0a4-4b0e-bc41-e23eab413801-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.763986 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34c23fbf-c0a4-4b0e-bc41-e23eab413801-logs\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.768429 4761 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/34c23fbf-c0a4-4b0e-bc41-e23eab413801-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.769378 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c23fbf-c0a4-4b0e-bc41-e23eab413801-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.770370 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c23fbf-c0a4-4b0e-bc41-e23eab413801-config-data\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.783983 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8tfx\" (UniqueName: \"kubernetes.io/projected/34c23fbf-c0a4-4b0e-bc41-e23eab413801-kube-api-access-x8tfx\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.878351 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.223158 4761 generic.go:334] "Generic (PLEG): container finished" podID="83e95e07-cc49-4e75-a0e9-0299705fc32a" containerID="73e12f6e261d32881449931ab013be50ac0618447dbd3fb812023e062eb1546a" exitCode=0 Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.223536 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83e95e07-cc49-4e75-a0e9-0299705fc32a","Type":"ContainerDied","Data":"73e12f6e261d32881449931ab013be50ac0618447dbd3fb812023e062eb1546a"} Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.301076 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.374765 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-combined-ca-bundle\") pod \"83e95e07-cc49-4e75-a0e9-0299705fc32a\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.375074 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-config-data\") pod \"83e95e07-cc49-4e75-a0e9-0299705fc32a\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.375255 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bthnz\" (UniqueName: \"kubernetes.io/projected/83e95e07-cc49-4e75-a0e9-0299705fc32a-kube-api-access-bthnz\") pod \"83e95e07-cc49-4e75-a0e9-0299705fc32a\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.380568 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/83e95e07-cc49-4e75-a0e9-0299705fc32a-kube-api-access-bthnz" (OuterVolumeSpecName: "kube-api-access-bthnz") pod "83e95e07-cc49-4e75-a0e9-0299705fc32a" (UID: "83e95e07-cc49-4e75-a0e9-0299705fc32a"). InnerVolumeSpecName "kube-api-access-bthnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.408224 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-config-data" (OuterVolumeSpecName: "config-data") pod "83e95e07-cc49-4e75-a0e9-0299705fc32a" (UID: "83e95e07-cc49-4e75-a0e9-0299705fc32a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.424782 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83e95e07-cc49-4e75-a0e9-0299705fc32a" (UID: "83e95e07-cc49-4e75-a0e9-0299705fc32a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:34 crc kubenswrapper[4761]: W0307 08:15:34.437757 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34c23fbf_c0a4_4b0e_bc41_e23eab413801.slice/crio-1dd3c9d55485cb857b8c8088f8561aae15284919a3afc9b985ad40bda53bd8e3 WatchSource:0}: Error finding container 1dd3c9d55485cb857b8c8088f8561aae15284919a3afc9b985ad40bda53bd8e3: Status 404 returned error can't find the container with id 1dd3c9d55485cb857b8c8088f8561aae15284919a3afc9b985ad40bda53bd8e3 Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.441733 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.477829 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bthnz\" (UniqueName: \"kubernetes.io/projected/83e95e07-cc49-4e75-a0e9-0299705fc32a-kube-api-access-bthnz\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.478148 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.478248 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.235631 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83e95e07-cc49-4e75-a0e9-0299705fc32a","Type":"ContainerDied","Data":"c57f522bb33dbd2d3cf1cf8e0cf8793d2336fe8f26897174337fdee177604cb6"} Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.235911 4761 scope.go:117] "RemoveContainer" 
containerID="73e12f6e261d32881449931ab013be50ac0618447dbd3fb812023e062eb1546a" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.235655 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.245510 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34c23fbf-c0a4-4b0e-bc41-e23eab413801","Type":"ContainerStarted","Data":"893adad1f6413180cecd16b7af2327f1d577bbbc62fdf2650f256fdf46aa201a"} Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.245558 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34c23fbf-c0a4-4b0e-bc41-e23eab413801","Type":"ContainerStarted","Data":"8dba3a7a301570f83a058ddbb8f4f1d5bc48aeabd27e7ea2daf2991766edd2e7"} Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.245571 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34c23fbf-c0a4-4b0e-bc41-e23eab413801","Type":"ContainerStarted","Data":"1dd3c9d55485cb857b8c8088f8561aae15284919a3afc9b985ad40bda53bd8e3"} Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.269624 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.2696064590000002 podStartE2EDuration="2.269606459s" podCreationTimestamp="2026-03-07 08:15:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:15:35.260064031 +0000 UTC m=+1592.169230516" watchObservedRunningTime="2026-03-07 08:15:35.269606459 +0000 UTC m=+1592.178772934" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.289445 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.301905 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-scheduler-0"] Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.328497 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:15:35 crc kubenswrapper[4761]: E0307 08:15:35.329084 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e95e07-cc49-4e75-a0e9-0299705fc32a" containerName="nova-scheduler-scheduler" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.329106 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e95e07-cc49-4e75-a0e9-0299705fc32a" containerName="nova-scheduler-scheduler" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.329350 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e95e07-cc49-4e75-a0e9-0299705fc32a" containerName="nova-scheduler-scheduler" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.330173 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.332029 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.345664 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.397116 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv4qp\" (UniqueName: \"kubernetes.io/projected/6517c184-4de2-40f1-a808-90030b11e0a9-kube-api-access-mv4qp\") pod \"nova-scheduler-0\" (UID: \"6517c184-4de2-40f1-a808-90030b11e0a9\") " pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.397224 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6517c184-4de2-40f1-a808-90030b11e0a9-config-data\") pod \"nova-scheduler-0\" 
(UID: \"6517c184-4de2-40f1-a808-90030b11e0a9\") " pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.397301 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6517c184-4de2-40f1-a808-90030b11e0a9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6517c184-4de2-40f1-a808-90030b11e0a9\") " pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.499110 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv4qp\" (UniqueName: \"kubernetes.io/projected/6517c184-4de2-40f1-a808-90030b11e0a9-kube-api-access-mv4qp\") pod \"nova-scheduler-0\" (UID: \"6517c184-4de2-40f1-a808-90030b11e0a9\") " pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.499225 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6517c184-4de2-40f1-a808-90030b11e0a9-config-data\") pod \"nova-scheduler-0\" (UID: \"6517c184-4de2-40f1-a808-90030b11e0a9\") " pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.499367 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6517c184-4de2-40f1-a808-90030b11e0a9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6517c184-4de2-40f1-a808-90030b11e0a9\") " pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.505095 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6517c184-4de2-40f1-a808-90030b11e0a9-config-data\") pod \"nova-scheduler-0\" (UID: \"6517c184-4de2-40f1-a808-90030b11e0a9\") " pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.505815 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6517c184-4de2-40f1-a808-90030b11e0a9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6517c184-4de2-40f1-a808-90030b11e0a9\") " pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.515330 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv4qp\" (UniqueName: \"kubernetes.io/projected/6517c184-4de2-40f1-a808-90030b11e0a9-kube-api-access-mv4qp\") pod \"nova-scheduler-0\" (UID: \"6517c184-4de2-40f1-a808-90030b11e0a9\") " pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.645294 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.721164 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83e95e07-cc49-4e75-a0e9-0299705fc32a" path="/var/lib/kubelet/pods/83e95e07-cc49-4e75-a0e9-0299705fc32a/volumes" Mar 07 08:15:36 crc kubenswrapper[4761]: I0307 08:15:36.153247 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:15:36 crc kubenswrapper[4761]: W0307 08:15:36.154903 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6517c184_4de2_40f1_a808_90030b11e0a9.slice/crio-3b69ae57f2bdc4e361dbd61656a32eed3caa52b808af82da8654a21217d43179 WatchSource:0}: Error finding container 3b69ae57f2bdc4e361dbd61656a32eed3caa52b808af82da8654a21217d43179: Status 404 returned error can't find the container with id 3b69ae57f2bdc4e361dbd61656a32eed3caa52b808af82da8654a21217d43179 Mar 07 08:15:36 crc kubenswrapper[4761]: I0307 08:15:36.262204 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"6517c184-4de2-40f1-a808-90030b11e0a9","Type":"ContainerStarted","Data":"3b69ae57f2bdc4e361dbd61656a32eed3caa52b808af82da8654a21217d43179"} Mar 07 08:15:37 crc kubenswrapper[4761]: I0307 08:15:37.277159 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6517c184-4de2-40f1-a808-90030b11e0a9","Type":"ContainerStarted","Data":"545fad8e90dff5b251690d2c7b56df5575389f7ac80e54322bfb6e85d81af931"} Mar 07 08:15:37 crc kubenswrapper[4761]: I0307 08:15:37.296278 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.296262081 podStartE2EDuration="2.296262081s" podCreationTimestamp="2026-03-07 08:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:15:37.293789209 +0000 UTC m=+1594.202955694" watchObservedRunningTime="2026-03-07 08:15:37.296262081 +0000 UTC m=+1594.205428556" Mar 07 08:15:38 crc kubenswrapper[4761]: I0307 08:15:38.879034 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 07 08:15:38 crc kubenswrapper[4761]: I0307 08:15:38.879439 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 07 08:15:38 crc kubenswrapper[4761]: E0307 08:15:38.920461 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f20b55e_e643_4c84_8929_dccc23092137.slice/crio-24ce24b7ae154c50bbadc9b227eb94ce0080c6f5a420a8791dcddd59fe83f5fc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f20b55e_e643_4c84_8929_dccc23092137.slice/crio-c15a8f34b90748d4123aa8305d977da985ad4ee833bd6258f7893d25a0f01981.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f20b55e_e643_4c84_8929_dccc23092137.slice/crio-conmon-24ce24b7ae154c50bbadc9b227eb94ce0080c6f5a420a8791dcddd59fe83f5fc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f20b55e_e643_4c84_8929_dccc23092137.slice/crio-conmon-c15a8f34b90748d4123aa8305d977da985ad4ee833bd6258f7893d25a0f01981.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.298050 4761 generic.go:334] "Generic (PLEG): container finished" podID="5f20b55e-e643-4c84-8929-dccc23092137" containerID="24ce24b7ae154c50bbadc9b227eb94ce0080c6f5a420a8791dcddd59fe83f5fc" exitCode=137 Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.298371 4761 generic.go:334] "Generic (PLEG): container finished" podID="5f20b55e-e643-4c84-8929-dccc23092137" containerID="c15a8f34b90748d4123aa8305d977da985ad4ee833bd6258f7893d25a0f01981" exitCode=137 Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.298152 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerDied","Data":"24ce24b7ae154c50bbadc9b227eb94ce0080c6f5a420a8791dcddd59fe83f5fc"} Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.298412 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerDied","Data":"c15a8f34b90748d4123aa8305d977da985ad4ee833bd6258f7893d25a0f01981"} Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.298428 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerDied","Data":"85e34ca45a813a41b0ef29847196fda16d966d169d96229373dd4870bf277c21"} Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.298437 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85e34ca45a813a41b0ef29847196fda16d966d169d96229373dd4870bf277c21" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.323864 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.431923 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-combined-ca-bundle\") pod \"5f20b55e-e643-4c84-8929-dccc23092137\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.431975 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-scripts\") pod \"5f20b55e-e643-4c84-8929-dccc23092137\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.432169 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-997bx\" (UniqueName: \"kubernetes.io/projected/5f20b55e-e643-4c84-8929-dccc23092137-kube-api-access-997bx\") pod \"5f20b55e-e643-4c84-8929-dccc23092137\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.432237 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-config-data\") pod \"5f20b55e-e643-4c84-8929-dccc23092137\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 
08:15:39.440585 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-scripts" (OuterVolumeSpecName: "scripts") pod "5f20b55e-e643-4c84-8929-dccc23092137" (UID: "5f20b55e-e643-4c84-8929-dccc23092137"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.440655 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f20b55e-e643-4c84-8929-dccc23092137-kube-api-access-997bx" (OuterVolumeSpecName: "kube-api-access-997bx") pod "5f20b55e-e643-4c84-8929-dccc23092137" (UID: "5f20b55e-e643-4c84-8929-dccc23092137"). InnerVolumeSpecName "kube-api-access-997bx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.534983 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.535323 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-997bx\" (UniqueName: \"kubernetes.io/projected/5f20b55e-e643-4c84-8929-dccc23092137-kube-api-access-997bx\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.579158 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-config-data" (OuterVolumeSpecName: "config-data") pod "5f20b55e-e643-4c84-8929-dccc23092137" (UID: "5f20b55e-e643-4c84-8929-dccc23092137"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.593668 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f20b55e-e643-4c84-8929-dccc23092137" (UID: "5f20b55e-e643-4c84-8929-dccc23092137"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.638444 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.638475 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.310338 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.341661 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.355536 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"]
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.368202 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Mar 07 08:15:40 crc kubenswrapper[4761]: E0307 08:15:40.368901 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-notifier"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.368926 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-notifier"
Mar 07 08:15:40 crc kubenswrapper[4761]: E0307 08:15:40.368963 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-listener"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.368972 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-listener"
Mar 07 08:15:40 crc kubenswrapper[4761]: E0307 08:15:40.368989 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-api"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.368997 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-api"
Mar 07 08:15:40 crc kubenswrapper[4761]: E0307 08:15:40.369011 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-evaluator"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.369019 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-evaluator"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.369326 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-api"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.369359 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-notifier"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.369377 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-listener"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.369404 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-evaluator"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.372098 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.387432 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.387509 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-wcdfq"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.387617 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.387676 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.387745 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.393295 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.489054 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-config-data\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.492610 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddsfc\" (UniqueName: \"kubernetes.io/projected/887264dd-6715-4050-a798-9a88572bab63-kube-api-access-ddsfc\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.492749 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-public-tls-certs\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.492946 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-scripts\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.493285 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-combined-ca-bundle\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.493560 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-internal-tls-certs\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.595657 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-internal-tls-certs\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.595770 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-config-data\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.595798 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddsfc\" (UniqueName: \"kubernetes.io/projected/887264dd-6715-4050-a798-9a88572bab63-kube-api-access-ddsfc\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.595828 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-public-tls-certs\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.595904 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-scripts\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.596497 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-combined-ca-bundle\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.600824 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-config-data\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.602204 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-combined-ca-bundle\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.607559 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-internal-tls-certs\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.608698 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-scripts\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.609087 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-public-tls-certs\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.619156 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddsfc\" (UniqueName: \"kubernetes.io/projected/887264dd-6715-4050-a798-9a88572bab63-kube-api-access-ddsfc\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.645654 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.704573 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Mar 07 08:15:41 crc kubenswrapper[4761]: I0307 08:15:41.197393 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Mar 07 08:15:41 crc kubenswrapper[4761]: W0307 08:15:41.198632 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod887264dd_6715_4050_a798_9a88572bab63.slice/crio-f8d557621f70be05e00a45ad085f517dd06d59ecf5a8c7716d6f59d81155a216 WatchSource:0}: Error finding container f8d557621f70be05e00a45ad085f517dd06d59ecf5a8c7716d6f59d81155a216: Status 404 returned error can't find the container with id f8d557621f70be05e00a45ad085f517dd06d59ecf5a8c7716d6f59d81155a216
Mar 07 08:15:41 crc kubenswrapper[4761]: I0307 08:15:41.325228 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerStarted","Data":"f8d557621f70be05e00a45ad085f517dd06d59ecf5a8c7716d6f59d81155a216"}
Mar 07 08:15:41 crc kubenswrapper[4761]: I0307 08:15:41.703098 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 07 08:15:41 crc kubenswrapper[4761]: I0307 08:15:41.703147 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 07 08:15:41 crc kubenswrapper[4761]: I0307 08:15:41.720964 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f20b55e-e643-4c84-8929-dccc23092137" path="/var/lib/kubelet/pods/5f20b55e-e643-4c84-8929-dccc23092137/volumes"
Mar 07 08:15:42 crc kubenswrapper[4761]: I0307 08:15:42.338327 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerStarted","Data":"9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45"}
Mar 07 08:15:42 crc kubenswrapper[4761]: I0307 08:15:42.711025 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c12aff9a-a09d-4da9-8a3d-d59591060f22" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.13:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 08:15:42 crc kubenswrapper[4761]: I0307 08:15:42.718905 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c12aff9a-a09d-4da9-8a3d-d59591060f22" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.13:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 08:15:43 crc kubenswrapper[4761]: I0307 08:15:43.352611 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerStarted","Data":"f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862"}
Mar 07 08:15:43 crc kubenswrapper[4761]: I0307 08:15:43.352954 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerStarted","Data":"a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0"}
Mar 07 08:15:43 crc kubenswrapper[4761]: E0307 08:15:43.553783 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]"
Mar 07 08:15:43 crc kubenswrapper[4761]: I0307 08:15:43.879084 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 07 08:15:43 crc kubenswrapper[4761]: I0307 08:15:43.879131 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 07 08:15:44 crc kubenswrapper[4761]: I0307 08:15:44.369335 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerStarted","Data":"234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96"}
Mar 07 08:15:44 crc kubenswrapper[4761]: I0307 08:15:44.472065 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.761162139 podStartE2EDuration="4.472044926s" podCreationTimestamp="2026-03-07 08:15:40 +0000 UTC" firstStartedPulling="2026-03-07 08:15:41.202572706 +0000 UTC m=+1598.111739181" lastFinishedPulling="2026-03-07 08:15:43.913455473 +0000 UTC m=+1600.822621968" observedRunningTime="2026-03-07 08:15:44.391792564 +0000 UTC m=+1601.300959039" watchObservedRunningTime="2026-03-07 08:15:44.472044926 +0000 UTC m=+1601.381211401"
Mar 07 08:15:44 crc kubenswrapper[4761]: I0307 08:15:44.880007 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="34c23fbf-c0a4-4b0e-bc41-e23eab413801" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.14:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 08:15:44 crc kubenswrapper[4761]: I0307 08:15:44.886924 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="34c23fbf-c0a4-4b0e-bc41-e23eab413801" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.14:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 08:15:45 crc kubenswrapper[4761]: I0307 08:15:45.645820 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 07 08:15:45 crc kubenswrapper[4761]: I0307 08:15:45.682165 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 07 08:15:46 crc kubenswrapper[4761]: I0307 08:15:46.428139 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 07 08:15:48 crc kubenswrapper[4761]: E0307 08:15:48.249844 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]"
Mar 07 08:15:48 crc kubenswrapper[4761]: E0307 08:15:48.250132 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]"
Mar 07 08:15:48 crc kubenswrapper[4761]: I0307 08:15:48.395645 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Mar 07 08:15:49 crc kubenswrapper[4761]: E0307 08:15:49.003666 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]"
Mar 07 08:15:51 crc kubenswrapper[4761]: I0307 08:15:51.719890 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 07 08:15:51 crc kubenswrapper[4761]: I0307 08:15:51.720418 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 07 08:15:51 crc kubenswrapper[4761]: I0307 08:15:51.720962 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 07 08:15:51 crc kubenswrapper[4761]: I0307 08:15:51.720994 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 07 08:15:51 crc kubenswrapper[4761]: I0307 08:15:51.730563 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 07 08:15:51 crc kubenswrapper[4761]: I0307 08:15:51.733182 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 07 08:15:52 crc kubenswrapper[4761]: I0307 08:15:52.784750 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 07 08:15:52 crc kubenswrapper[4761]: I0307 08:15:52.785228 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="813224b8-8c59-4153-b642-5ee9da95777d" containerName="kube-state-metrics" containerID="cri-o://10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1" gracePeriod=30
Mar 07 08:15:52 crc kubenswrapper[4761]: I0307 08:15:52.964664 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"]
Mar 07 08:15:52 crc kubenswrapper[4761]: I0307 08:15:52.965004 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="43e38c78-3b46-4182-bae7-aa8c4d9b909b" containerName="mysqld-exporter" containerID="cri-o://790c4ccb2b2bb73e6a2faf2a7ff889dee3ae87ca4c2382aa000143aa0c34cafb" gracePeriod=30
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.486656 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.511630 4761 generic.go:334] "Generic (PLEG): container finished" podID="43e38c78-3b46-4182-bae7-aa8c4d9b909b" containerID="790c4ccb2b2bb73e6a2faf2a7ff889dee3ae87ca4c2382aa000143aa0c34cafb" exitCode=2
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.511925 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"43e38c78-3b46-4182-bae7-aa8c4d9b909b","Type":"ContainerDied","Data":"790c4ccb2b2bb73e6a2faf2a7ff889dee3ae87ca4c2382aa000143aa0c34cafb"}
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.535259 4761 generic.go:334] "Generic (PLEG): container finished" podID="813224b8-8c59-4153-b642-5ee9da95777d" containerID="10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1" exitCode=2
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.540214 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.540891 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"813224b8-8c59-4153-b642-5ee9da95777d","Type":"ContainerDied","Data":"10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1"}
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.540926 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"813224b8-8c59-4153-b642-5ee9da95777d","Type":"ContainerDied","Data":"989e755014017208d03dbc74013c0dbc3eb2d3cb892edef48a2df938485c63cc"}
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.540943 4761 scope.go:117] "RemoveContainer" containerID="10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1"
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.571426 4761 scope.go:117] "RemoveContainer" containerID="10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1"
Mar 07 08:15:53 crc kubenswrapper[4761]: E0307 08:15:53.572467 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1\": container with ID starting with 10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1 not found: ID does not exist" containerID="10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1"
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.572500 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1"} err="failed to get container status \"10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1\": rpc error: code = NotFound desc = could not find container \"10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1\": container with ID starting with 10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1 not found: ID does not exist"
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.624589 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhtf2\" (UniqueName: \"kubernetes.io/projected/813224b8-8c59-4153-b642-5ee9da95777d-kube-api-access-bhtf2\") pod \"813224b8-8c59-4153-b642-5ee9da95777d\" (UID: \"813224b8-8c59-4153-b642-5ee9da95777d\") "
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.634243 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/813224b8-8c59-4153-b642-5ee9da95777d-kube-api-access-bhtf2" (OuterVolumeSpecName: "kube-api-access-bhtf2") pod "813224b8-8c59-4153-b642-5ee9da95777d" (UID: "813224b8-8c59-4153-b642-5ee9da95777d"). InnerVolumeSpecName "kube-api-access-bhtf2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.651429 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.726117 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-combined-ca-bundle\") pod \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") "
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.726490 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzvdk\" (UniqueName: \"kubernetes.io/projected/43e38c78-3b46-4182-bae7-aa8c4d9b909b-kube-api-access-zzvdk\") pod \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") "
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.727143 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-config-data\") pod \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") "
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.728174 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhtf2\" (UniqueName: \"kubernetes.io/projected/813224b8-8c59-4153-b642-5ee9da95777d-kube-api-access-bhtf2\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.739696 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43e38c78-3b46-4182-bae7-aa8c4d9b909b-kube-api-access-zzvdk" (OuterVolumeSpecName: "kube-api-access-zzvdk") pod "43e38c78-3b46-4182-bae7-aa8c4d9b909b" (UID: "43e38c78-3b46-4182-bae7-aa8c4d9b909b"). InnerVolumeSpecName "kube-api-access-zzvdk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.761894 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43e38c78-3b46-4182-bae7-aa8c4d9b909b" (UID: "43e38c78-3b46-4182-bae7-aa8c4d9b909b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.821212 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-config-data" (OuterVolumeSpecName: "config-data") pod "43e38c78-3b46-4182-bae7-aa8c4d9b909b" (UID: "43e38c78-3b46-4182-bae7-aa8c4d9b909b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.830859 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.830881 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzvdk\" (UniqueName: \"kubernetes.io/projected/43e38c78-3b46-4182-bae7-aa8c4d9b909b-kube-api-access-zzvdk\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.831016 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-config-data\") on node \"crc\" DevicePath \"\""
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.876266 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.888115 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.899077 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 07 08:15:53 crc kubenswrapper[4761]: E0307 08:15:53.899575 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e38c78-3b46-4182-bae7-aa8c4d9b909b" containerName="mysqld-exporter"
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.899592 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e38c78-3b46-4182-bae7-aa8c4d9b909b" containerName="mysqld-exporter"
Mar 07 08:15:53 crc kubenswrapper[4761]: E0307 08:15:53.899609 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="813224b8-8c59-4153-b642-5ee9da95777d" containerName="kube-state-metrics"
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.899616 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="813224b8-8c59-4153-b642-5ee9da95777d" containerName="kube-state-metrics"
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.899850 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="43e38c78-3b46-4182-bae7-aa8c4d9b909b" containerName="mysqld-exporter"
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.899883 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="813224b8-8c59-4153-b642-5ee9da95777d" containerName="kube-state-metrics"
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.900674 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.905095 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.905669 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.951266 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.035248 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr84h\" (UniqueName: \"kubernetes.io/projected/ed86dd3e-17e0-467b-8243-8209a04dcbe1-kube-api-access-pr84h\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.035366 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed86dd3e-17e0-467b-8243-8209a04dcbe1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.035390 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ed86dd3e-17e0-467b-8243-8209a04dcbe1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.035487 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed86dd3e-17e0-467b-8243-8209a04dcbe1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.071972 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.094452 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.094567 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.137320 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed86dd3e-17e0-467b-8243-8209a04dcbe1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.137435 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr84h\" (UniqueName: \"kubernetes.io/projected/ed86dd3e-17e0-467b-8243-8209a04dcbe1-kube-api-access-pr84h\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.137525 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed86dd3e-17e0-467b-8243-8209a04dcbe1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.137548 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ed86dd3e-17e0-467b-8243-8209a04dcbe1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.142761 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed86dd3e-17e0-467b-8243-8209a04dcbe1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.147252 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ed86dd3e-17e0-467b-8243-8209a04dcbe1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.148096 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed86dd3e-17e0-467b-8243-8209a04dcbe1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.164032 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr84h\" (UniqueName: \"kubernetes.io/projected/ed86dd3e-17e0-467b-8243-8209a04dcbe1-kube-api-access-pr84h\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.288192 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.553104 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.553112 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"43e38c78-3b46-4182-bae7-aa8c4d9b909b","Type":"ContainerDied","Data":"e7e2fc64d0a795eee67dac1c574da8a7568d40cfe8d5bd6830d080270a74b5b0"}
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.553556 4761 scope.go:117] "RemoveContainer" containerID="790c4ccb2b2bb73e6a2faf2a7ff889dee3ae87ca4c2382aa000143aa0c34cafb"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.581474 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.604304 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"]
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.631072 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"]
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.660876 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"]
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.673577 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.677157 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.678054 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.728405 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.771527 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/6feb98fd-961e-4495-9ff4-8bafdd080e31-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.771586 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6feb98fd-961e-4495-9ff4-8bafdd080e31-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.771620 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7bzz\" (UniqueName: \"kubernetes.io/projected/6feb98fd-961e-4495-9ff4-8bafdd080e31-kube-api-access-c7bzz\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.771643 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6feb98fd-961e-4495-9ff4-8bafdd080e31-config-data\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.801027 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.874305 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/6feb98fd-961e-4495-9ff4-8bafdd080e31-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.874381 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6feb98fd-961e-4495-9ff4-8bafdd080e31-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.874423 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7bzz\" (UniqueName: \"kubernetes.io/projected/6feb98fd-961e-4495-9ff4-8bafdd080e31-kube-api-access-c7bzz\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.874451 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6feb98fd-961e-4495-9ff4-8bafdd080e31-config-data\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0"
Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.880662 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/6feb98fd-961e-4495-9ff4-8bafdd080e31-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.881954 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6feb98fd-961e-4495-9ff4-8bafdd080e31-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.882419 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6feb98fd-961e-4495-9ff4-8bafdd080e31-config-data\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.894661 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7bzz\" (UniqueName: \"kubernetes.io/projected/6feb98fd-961e-4495-9ff4-8bafdd080e31-kube-api-access-c7bzz\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0" Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.014940 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.271391 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.272356 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="sg-core" containerID="cri-o://7bec2a6b9f93d88e6175882138e628f39fffb24361734adf393186dbc436254e" gracePeriod=30 Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.272438 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="proxy-httpd" containerID="cri-o://c9c45ba443109a6b8904801f4882aa5d094d9ac1223032ff87930bb525a6b320" gracePeriod=30 Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.272483 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="ceilometer-notification-agent" containerID="cri-o://65b25fb5e9d89c0452608d4caa0eda40d046ec312401e66e3f11d7503dbeb516" gracePeriod=30 Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.271900 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="ceilometer-central-agent" containerID="cri-o://42ec41a29efcc250aea778b070c1f73c664bcc94b85a43a129c7768c52da4fad" gracePeriod=30 Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.537430 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 07 08:15:55 crc kubenswrapper[4761]: W0307 08:15:55.551726 4761 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6feb98fd_961e_4495_9ff4_8bafdd080e31.slice/crio-7e1e9b0b1f1eec005841ee709ce51a60717c4db4f65fef23231480248b1f0013 WatchSource:0}: Error finding container 7e1e9b0b1f1eec005841ee709ce51a60717c4db4f65fef23231480248b1f0013: Status 404 returned error can't find the container with id 7e1e9b0b1f1eec005841ee709ce51a60717c4db4f65fef23231480248b1f0013 Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.586061 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"6feb98fd-961e-4495-9ff4-8bafdd080e31","Type":"ContainerStarted","Data":"7e1e9b0b1f1eec005841ee709ce51a60717c4db4f65fef23231480248b1f0013"} Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.589837 4761 generic.go:334] "Generic (PLEG): container finished" podID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerID="c9c45ba443109a6b8904801f4882aa5d094d9ac1223032ff87930bb525a6b320" exitCode=0 Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.589870 4761 generic.go:334] "Generic (PLEG): container finished" podID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerID="7bec2a6b9f93d88e6175882138e628f39fffb24361734adf393186dbc436254e" exitCode=2 Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.589901 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerDied","Data":"c9c45ba443109a6b8904801f4882aa5d094d9ac1223032ff87930bb525a6b320"} Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.589941 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerDied","Data":"7bec2a6b9f93d88e6175882138e628f39fffb24361734adf393186dbc436254e"} Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.596429 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"ed86dd3e-17e0-467b-8243-8209a04dcbe1","Type":"ContainerStarted","Data":"48e06af2d45382b9f79e3e3836b7b75f59045a850e12d53cec8de2b3b534f21b"} Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.596470 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ed86dd3e-17e0-467b-8243-8209a04dcbe1","Type":"ContainerStarted","Data":"22a17986a44cccb2a75e649c9da7a6b466df1c555c1dc3fdf72c1fa953e275bc"} Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.596509 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.623938 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.196374825 podStartE2EDuration="2.62391911s" podCreationTimestamp="2026-03-07 08:15:53 +0000 UTC" firstStartedPulling="2026-03-07 08:15:54.801897317 +0000 UTC m=+1611.711063792" lastFinishedPulling="2026-03-07 08:15:55.229441602 +0000 UTC m=+1612.138608077" observedRunningTime="2026-03-07 08:15:55.612378132 +0000 UTC m=+1612.521544617" watchObservedRunningTime="2026-03-07 08:15:55.62391911 +0000 UTC m=+1612.533085585" Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.722361 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43e38c78-3b46-4182-bae7-aa8c4d9b909b" path="/var/lib/kubelet/pods/43e38c78-3b46-4182-bae7-aa8c4d9b909b/volumes" Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.723132 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="813224b8-8c59-4153-b642-5ee9da95777d" path="/var/lib/kubelet/pods/813224b8-8c59-4153-b642-5ee9da95777d/volumes" Mar 07 08:15:56 crc kubenswrapper[4761]: I0307 08:15:56.610604 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" 
event={"ID":"6feb98fd-961e-4495-9ff4-8bafdd080e31","Type":"ContainerStarted","Data":"13a05ec3200d2094bdaba10b7f7630e6ba61f0552c5a68a1e2716fef9a9532b4"} Mar 07 08:15:56 crc kubenswrapper[4761]: I0307 08:15:56.614937 4761 generic.go:334] "Generic (PLEG): container finished" podID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerID="42ec41a29efcc250aea778b070c1f73c664bcc94b85a43a129c7768c52da4fad" exitCode=0 Mar 07 08:15:56 crc kubenswrapper[4761]: I0307 08:15:56.614976 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerDied","Data":"42ec41a29efcc250aea778b070c1f73c664bcc94b85a43a129c7768c52da4fad"} Mar 07 08:15:56 crc kubenswrapper[4761]: I0307 08:15:56.630493 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.107480293 podStartE2EDuration="2.630474797s" podCreationTimestamp="2026-03-07 08:15:54 +0000 UTC" firstStartedPulling="2026-03-07 08:15:55.55612644 +0000 UTC m=+1612.465292925" lastFinishedPulling="2026-03-07 08:15:56.079120954 +0000 UTC m=+1612.988287429" observedRunningTime="2026-03-07 08:15:56.626937539 +0000 UTC m=+1613.536104014" watchObservedRunningTime="2026-03-07 08:15:56.630474797 +0000 UTC m=+1613.539641262" Mar 07 08:15:57 crc kubenswrapper[4761]: I0307 08:15:57.632155 4761 generic.go:334] "Generic (PLEG): container finished" podID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerID="65b25fb5e9d89c0452608d4caa0eda40d046ec312401e66e3f11d7503dbeb516" exitCode=0 Mar 07 08:15:57 crc kubenswrapper[4761]: I0307 08:15:57.633421 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerDied","Data":"65b25fb5e9d89c0452608d4caa0eda40d046ec312401e66e3f11d7503dbeb516"} Mar 07 08:15:57 crc kubenswrapper[4761]: I0307 08:15:57.908216 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.069792 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-sg-core-conf-yaml\") pod \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.070420 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-log-httpd\") pod \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.070528 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-combined-ca-bundle\") pod \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.070580 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-config-data\") pod \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.070633 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76z9f\" (UniqueName: \"kubernetes.io/projected/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-kube-api-access-76z9f\") pod \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.070731 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-scripts\") pod \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.070765 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-run-httpd\") pod \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.072261 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "27e0eb4e-cf40-4edc-aa40-d90412b78ad7" (UID: "27e0eb4e-cf40-4edc-aa40-d90412b78ad7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.077357 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "27e0eb4e-cf40-4edc-aa40-d90412b78ad7" (UID: "27e0eb4e-cf40-4edc-aa40-d90412b78ad7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.086915 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-scripts" (OuterVolumeSpecName: "scripts") pod "27e0eb4e-cf40-4edc-aa40-d90412b78ad7" (UID: "27e0eb4e-cf40-4edc-aa40-d90412b78ad7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.118945 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-kube-api-access-76z9f" (OuterVolumeSpecName: "kube-api-access-76z9f") pod "27e0eb4e-cf40-4edc-aa40-d90412b78ad7" (UID: "27e0eb4e-cf40-4edc-aa40-d90412b78ad7"). InnerVolumeSpecName "kube-api-access-76z9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.178412 4761 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.178703 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76z9f\" (UniqueName: \"kubernetes.io/projected/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-kube-api-access-76z9f\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.178727 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.178736 4761 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.198375 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "27e0eb4e-cf40-4edc-aa40-d90412b78ad7" (UID: "27e0eb4e-cf40-4edc-aa40-d90412b78ad7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.253920 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27e0eb4e-cf40-4edc-aa40-d90412b78ad7" (UID: "27e0eb4e-cf40-4edc-aa40-d90412b78ad7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.282831 4761 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.282876 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.286344 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-config-data" (OuterVolumeSpecName: "config-data") pod "27e0eb4e-cf40-4edc-aa40-d90412b78ad7" (UID: "27e0eb4e-cf40-4edc-aa40-d90412b78ad7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.384508 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.652590 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerDied","Data":"2d73a155caa3071bba582126bc91455e8983f20c114de33421479dadadcdca21"} Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.652640 4761 scope.go:117] "RemoveContainer" containerID="c9c45ba443109a6b8904801f4882aa5d094d9ac1223032ff87930bb525a6b320" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.652665 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.695024 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.702042 4761 scope.go:117] "RemoveContainer" containerID="7bec2a6b9f93d88e6175882138e628f39fffb24361734adf393186dbc436254e" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.708051 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.730484 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:58 crc kubenswrapper[4761]: E0307 08:15:58.731033 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="sg-core" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.731052 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="sg-core" Mar 07 08:15:58 crc 
kubenswrapper[4761]: E0307 08:15:58.731069 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="ceilometer-central-agent" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.731075 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="ceilometer-central-agent" Mar 07 08:15:58 crc kubenswrapper[4761]: E0307 08:15:58.731112 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="proxy-httpd" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.731118 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="proxy-httpd" Mar 07 08:15:58 crc kubenswrapper[4761]: E0307 08:15:58.731132 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="ceilometer-notification-agent" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.731138 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="ceilometer-notification-agent" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.733270 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="proxy-httpd" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.733302 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="ceilometer-central-agent" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.733326 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="sg-core" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.733338 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" 
containerName="ceilometer-notification-agent" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.737553 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.740890 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.741092 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.741316 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.762003 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.789680 4761 scope.go:117] "RemoveContainer" containerID="65b25fb5e9d89c0452608d4caa0eda40d046ec312401e66e3f11d7503dbeb516" Mar 07 08:15:58 crc kubenswrapper[4761]: E0307 08:15:58.822412 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.866142 4761 scope.go:117] "RemoveContainer" containerID="42ec41a29efcc250aea778b070c1f73c664bcc94b85a43a129c7768c52da4fad" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.893536 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-run-httpd\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" 
Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.894416 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.894604 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.894801 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-log-httpd\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.895120 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82v59\" (UniqueName: \"kubernetes.io/projected/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-kube-api-access-82v59\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.895246 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-scripts\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.895357 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-config-data\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.895511 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.997417 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-run-httpd\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.997503 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.997541 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.997584 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-log-httpd\") pod \"ceilometer-0\" 
(UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.997677 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82v59\" (UniqueName: \"kubernetes.io/projected/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-kube-api-access-82v59\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.997706 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-scripts\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.997836 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-config-data\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.997882 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.998385 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-run-httpd\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.998616 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-log-httpd\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.001999 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-scripts\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.002214 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.002686 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.003849 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.006339 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-config-data\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.017348 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82v59\" (UniqueName: \"kubernetes.io/projected/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-kube-api-access-82v59\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:59 crc kubenswrapper[4761]: E0307 08:15:59.056516 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]" Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.067833 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.547102 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.667895 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerStarted","Data":"34896ffbbf74f44b14c712613aa10174155f897f28b13e7abcaec728564a98f8"} Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.719578 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" path="/var/lib/kubelet/pods/27e0eb4e-cf40-4edc-aa40-d90412b78ad7/volumes" Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.175139 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547856-zvszx"] Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.179503 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547856-zvszx" Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.187824 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.192901 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.192924 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.199952 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547856-zvszx"] Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.225923 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsbfm\" (UniqueName: \"kubernetes.io/projected/d9136161-bf41-4d51-8873-1862fc46f1ea-kube-api-access-tsbfm\") pod \"auto-csr-approver-29547856-zvszx\" (UID: \"d9136161-bf41-4d51-8873-1862fc46f1ea\") " pod="openshift-infra/auto-csr-approver-29547856-zvszx" Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.328461 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsbfm\" (UniqueName: \"kubernetes.io/projected/d9136161-bf41-4d51-8873-1862fc46f1ea-kube-api-access-tsbfm\") pod \"auto-csr-approver-29547856-zvszx\" (UID: \"d9136161-bf41-4d51-8873-1862fc46f1ea\") " pod="openshift-infra/auto-csr-approver-29547856-zvszx" Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.350661 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsbfm\" (UniqueName: \"kubernetes.io/projected/d9136161-bf41-4d51-8873-1862fc46f1ea-kube-api-access-tsbfm\") pod \"auto-csr-approver-29547856-zvszx\" (UID: \"d9136161-bf41-4d51-8873-1862fc46f1ea\") " 
pod="openshift-infra/auto-csr-approver-29547856-zvszx" Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.512129 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547856-zvszx" Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.685302 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerStarted","Data":"d376489038c4ccb7f00aef3539237578093f1457f6e64018ec840ed437e42319"} Mar 07 08:16:01 crc kubenswrapper[4761]: I0307 08:16:01.013623 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547856-zvszx"] Mar 07 08:16:01 crc kubenswrapper[4761]: W0307 08:16:01.016040 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9136161_bf41_4d51_8873_1862fc46f1ea.slice/crio-4b45fe3d6fcc6a884096284929f5891f782c8820bd9a85af707958e6f4098d16 WatchSource:0}: Error finding container 4b45fe3d6fcc6a884096284929f5891f782c8820bd9a85af707958e6f4098d16: Status 404 returned error can't find the container with id 4b45fe3d6fcc6a884096284929f5891f782c8820bd9a85af707958e6f4098d16 Mar 07 08:16:01 crc kubenswrapper[4761]: I0307 08:16:01.699045 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerStarted","Data":"36f8598b61bc3ed2a6a6a0981f0562b344c4139bd8288ab5f69cfa08c4a9cbf6"} Mar 07 08:16:01 crc kubenswrapper[4761]: I0307 08:16:01.700642 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547856-zvszx" event={"ID":"d9136161-bf41-4d51-8873-1862fc46f1ea","Type":"ContainerStarted","Data":"4b45fe3d6fcc6a884096284929f5891f782c8820bd9a85af707958e6f4098d16"} Mar 07 08:16:02 crc kubenswrapper[4761]: I0307 08:16:02.766277 4761 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-infra/auto-csr-approver-29547856-zvszx" event={"ID":"d9136161-bf41-4d51-8873-1862fc46f1ea","Type":"ContainerStarted","Data":"2dd284c471d3dab40868b7f4a2f639ee7f217f8244cd3b21fbf7065bef24cb93"} Mar 07 08:16:02 crc kubenswrapper[4761]: I0307 08:16:02.770587 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerStarted","Data":"ec657063ed3702df6bc015d81438f5457328c6560f0575fd6cd8b2b872832a1e"} Mar 07 08:16:02 crc kubenswrapper[4761]: I0307 08:16:02.800435 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547856-zvszx" podStartSLOduration=1.9567505600000001 podStartE2EDuration="2.800402883s" podCreationTimestamp="2026-03-07 08:16:00 +0000 UTC" firstStartedPulling="2026-03-07 08:16:01.018797355 +0000 UTC m=+1617.927963850" lastFinishedPulling="2026-03-07 08:16:01.862449688 +0000 UTC m=+1618.771616173" observedRunningTime="2026-03-07 08:16:02.784363193 +0000 UTC m=+1619.693529668" watchObservedRunningTime="2026-03-07 08:16:02.800402883 +0000 UTC m=+1619.709569398" Mar 07 08:16:03 crc kubenswrapper[4761]: I0307 08:16:03.796504 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerStarted","Data":"1b116038dda4e380159781b521072c45b5692624f648eb7a92c12570548c9ed1"} Mar 07 08:16:03 crc kubenswrapper[4761]: I0307 08:16:03.797144 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 08:16:03 crc kubenswrapper[4761]: I0307 08:16:03.826245 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.19192302 podStartE2EDuration="5.82622424s" podCreationTimestamp="2026-03-07 08:15:58 +0000 UTC" firstStartedPulling="2026-03-07 08:15:59.553159027 +0000 UTC m=+1616.462325502" 
lastFinishedPulling="2026-03-07 08:16:03.187460247 +0000 UTC m=+1620.096626722" observedRunningTime="2026-03-07 08:16:03.8141975 +0000 UTC m=+1620.723363975" watchObservedRunningTime="2026-03-07 08:16:03.82622424 +0000 UTC m=+1620.735390715" Mar 07 08:16:04 crc kubenswrapper[4761]: I0307 08:16:04.303912 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 07 08:16:04 crc kubenswrapper[4761]: I0307 08:16:04.806574 4761 generic.go:334] "Generic (PLEG): container finished" podID="d9136161-bf41-4d51-8873-1862fc46f1ea" containerID="2dd284c471d3dab40868b7f4a2f639ee7f217f8244cd3b21fbf7065bef24cb93" exitCode=0 Mar 07 08:16:04 crc kubenswrapper[4761]: I0307 08:16:04.806653 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547856-zvszx" event={"ID":"d9136161-bf41-4d51-8873-1862fc46f1ea","Type":"ContainerDied","Data":"2dd284c471d3dab40868b7f4a2f639ee7f217f8244cd3b21fbf7065bef24cb93"} Mar 07 08:16:06 crc kubenswrapper[4761]: I0307 08:16:06.241142 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547856-zvszx" Mar 07 08:16:06 crc kubenswrapper[4761]: I0307 08:16:06.299422 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsbfm\" (UniqueName: \"kubernetes.io/projected/d9136161-bf41-4d51-8873-1862fc46f1ea-kube-api-access-tsbfm\") pod \"d9136161-bf41-4d51-8873-1862fc46f1ea\" (UID: \"d9136161-bf41-4d51-8873-1862fc46f1ea\") " Mar 07 08:16:06 crc kubenswrapper[4761]: I0307 08:16:06.308949 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9136161-bf41-4d51-8873-1862fc46f1ea-kube-api-access-tsbfm" (OuterVolumeSpecName: "kube-api-access-tsbfm") pod "d9136161-bf41-4d51-8873-1862fc46f1ea" (UID: "d9136161-bf41-4d51-8873-1862fc46f1ea"). InnerVolumeSpecName "kube-api-access-tsbfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:16:06 crc kubenswrapper[4761]: I0307 08:16:06.404445 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsbfm\" (UniqueName: \"kubernetes.io/projected/d9136161-bf41-4d51-8873-1862fc46f1ea-kube-api-access-tsbfm\") on node \"crc\" DevicePath \"\"" Mar 07 08:16:06 crc kubenswrapper[4761]: I0307 08:16:06.829414 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547856-zvszx" event={"ID":"d9136161-bf41-4d51-8873-1862fc46f1ea","Type":"ContainerDied","Data":"4b45fe3d6fcc6a884096284929f5891f782c8820bd9a85af707958e6f4098d16"} Mar 07 08:16:06 crc kubenswrapper[4761]: I0307 08:16:06.829467 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b45fe3d6fcc6a884096284929f5891f782c8820bd9a85af707958e6f4098d16" Mar 07 08:16:06 crc kubenswrapper[4761]: I0307 08:16:06.829530 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547856-zvszx" Mar 07 08:16:06 crc kubenswrapper[4761]: I0307 08:16:06.886782 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547850-g6d9p"] Mar 07 08:16:06 crc kubenswrapper[4761]: I0307 08:16:06.897545 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547850-g6d9p"] Mar 07 08:16:07 crc kubenswrapper[4761]: I0307 08:16:07.718417 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f1c6039-d723-41f6-a7a2-42f53281a5fa" path="/var/lib/kubelet/pods/3f1c6039-d723-41f6-a7a2-42f53281a5fa/volumes" Mar 07 08:16:29 crc kubenswrapper[4761]: I0307 08:16:29.079427 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.203912 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-92qzx"] Mar 
07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.219459 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-92qzx"] Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.308804 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-bhq7g"] Mar 07 08:16:40 crc kubenswrapper[4761]: E0307 08:16:40.309581 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9136161-bf41-4d51-8873-1862fc46f1ea" containerName="oc" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.309597 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9136161-bf41-4d51-8873-1862fc46f1ea" containerName="oc" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.309803 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9136161-bf41-4d51-8873-1862fc46f1ea" containerName="oc" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.310623 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.320520 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-bhq7g"] Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.399446 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjrnt\" (UniqueName: \"kubernetes.io/projected/7f02c4d0-220b-4761-a494-7a054eef8672-kube-api-access-qjrnt\") pod \"heat-db-sync-bhq7g\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.399503 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-combined-ca-bundle\") pod \"heat-db-sync-bhq7g\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " pod="openstack/heat-db-sync-bhq7g" Mar 07 
08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.399806 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-config-data\") pod \"heat-db-sync-bhq7g\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.501673 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjrnt\" (UniqueName: \"kubernetes.io/projected/7f02c4d0-220b-4761-a494-7a054eef8672-kube-api-access-qjrnt\") pod \"heat-db-sync-bhq7g\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.501725 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-combined-ca-bundle\") pod \"heat-db-sync-bhq7g\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.501801 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-config-data\") pod \"heat-db-sync-bhq7g\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.507228 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-combined-ca-bundle\") pod \"heat-db-sync-bhq7g\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.521513 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-config-data\") pod \"heat-db-sync-bhq7g\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.521616 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjrnt\" (UniqueName: \"kubernetes.io/projected/7f02c4d0-220b-4761-a494-7a054eef8672-kube-api-access-qjrnt\") pod \"heat-db-sync-bhq7g\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.635197 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:41 crc kubenswrapper[4761]: W0307 08:16:41.142371 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f02c4d0_220b_4761_a494_7a054eef8672.slice/crio-6bcb7a4eb62fb26c45bb7aca3cf5cc559db3473a1fe9e0879f24bf81dcd894e7 WatchSource:0}: Error finding container 6bcb7a4eb62fb26c45bb7aca3cf5cc559db3473a1fe9e0879f24bf81dcd894e7: Status 404 returned error can't find the container with id 6bcb7a4eb62fb26c45bb7aca3cf5cc559db3473a1fe9e0879f24bf81dcd894e7 Mar 07 08:16:41 crc kubenswrapper[4761]: I0307 08:16:41.143004 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-bhq7g"] Mar 07 08:16:41 crc kubenswrapper[4761]: I0307 08:16:41.263563 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bhq7g" event={"ID":"7f02c4d0-220b-4761-a494-7a054eef8672","Type":"ContainerStarted","Data":"6bcb7a4eb62fb26c45bb7aca3cf5cc559db3473a1fe9e0879f24bf81dcd894e7"} Mar 07 08:16:41 crc kubenswrapper[4761]: I0307 08:16:41.737321 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dce2c706-6c24-4be8-b347-90448de8aaf9" 
path="/var/lib/kubelet/pods/dce2c706-6c24-4be8-b347-90448de8aaf9/volumes" Mar 07 08:16:42 crc kubenswrapper[4761]: I0307 08:16:42.227853 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.051806 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.248931 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.249266 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="ceilometer-central-agent" containerID="cri-o://d376489038c4ccb7f00aef3539237578093f1457f6e64018ec840ed437e42319" gracePeriod=30 Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.253290 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="proxy-httpd" containerID="cri-o://1b116038dda4e380159781b521072c45b5692624f648eb7a92c12570548c9ed1" gracePeriod=30 Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.253333 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="ceilometer-notification-agent" containerID="cri-o://36f8598b61bc3ed2a6a6a0981f0562b344c4139bd8288ab5f69cfa08c4a9cbf6" gracePeriod=30 Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.253405 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="sg-core" containerID="cri-o://ec657063ed3702df6bc015d81438f5457328c6560f0575fd6cd8b2b872832a1e" gracePeriod=30 Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.942605 
4761 generic.go:334] "Generic (PLEG): container finished" podID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerID="1b116038dda4e380159781b521072c45b5692624f648eb7a92c12570548c9ed1" exitCode=0 Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.942945 4761 generic.go:334] "Generic (PLEG): container finished" podID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerID="ec657063ed3702df6bc015d81438f5457328c6560f0575fd6cd8b2b872832a1e" exitCode=2 Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.942968 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerDied","Data":"1b116038dda4e380159781b521072c45b5692624f648eb7a92c12570548c9ed1"} Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.942995 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerDied","Data":"ec657063ed3702df6bc015d81438f5457328c6560f0575fd6cd8b2b872832a1e"} Mar 07 08:16:45 crc kubenswrapper[4761]: I0307 08:16:45.957385 4761 generic.go:334] "Generic (PLEG): container finished" podID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerID="d376489038c4ccb7f00aef3539237578093f1457f6e64018ec840ed437e42319" exitCode=0 Mar 07 08:16:45 crc kubenswrapper[4761]: I0307 08:16:45.957917 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerDied","Data":"d376489038c4ccb7f00aef3539237578093f1457f6e64018ec840ed437e42319"} Mar 07 08:16:48 crc kubenswrapper[4761]: I0307 08:16:48.318535 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" containerName="rabbitmq" containerID="cri-o://29cb38754c06ba4cf8ad902c0d21b151c7ca626800f06ecaaa2ef264e60c35b1" gracePeriod=604794 Mar 07 08:16:49 crc kubenswrapper[4761]: I0307 
08:16:49.448852 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" containerName="rabbitmq" containerID="cri-o://818287b0f8f3f1d44f2a907bb97c9168062fe658aaa3193d97412871fa4ab3f8" gracePeriod=604795 Mar 07 08:16:52 crc kubenswrapper[4761]: I0307 08:16:52.026818 4761 generic.go:334] "Generic (PLEG): container finished" podID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerID="36f8598b61bc3ed2a6a6a0981f0562b344c4139bd8288ab5f69cfa08c4a9cbf6" exitCode=0 Mar 07 08:16:52 crc kubenswrapper[4761]: I0307 08:16:52.027302 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerDied","Data":"36f8598b61bc3ed2a6a6a0981f0562b344c4139bd8288ab5f69cfa08c4a9cbf6"} Mar 07 08:16:55 crc kubenswrapper[4761]: I0307 08:16:55.071618 4761 generic.go:334] "Generic (PLEG): container finished" podID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" containerID="29cb38754c06ba4cf8ad902c0d21b151c7ca626800f06ecaaa2ef264e60c35b1" exitCode=0 Mar 07 08:16:55 crc kubenswrapper[4761]: I0307 08:16:55.071706 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7201e0b2-1f44-45f0-b746-b98f8cb01f8f","Type":"ContainerDied","Data":"29cb38754c06ba4cf8ad902c0d21b151c7ca626800f06ecaaa2ef264e60c35b1"} Mar 07 08:16:55 crc kubenswrapper[4761]: I0307 08:16:55.083865 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Mar 07 08:16:55 crc kubenswrapper[4761]: I0307 08:16:55.225864 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" containerName="rabbitmq" probeResult="failure" output="dial tcp 
10.217.0.134:5671: connect: connection refused" Mar 07 08:16:56 crc kubenswrapper[4761]: I0307 08:16:56.090499 4761 generic.go:334] "Generic (PLEG): container finished" podID="bc2f3dec-2838-4d30-93c2-631da252cdb7" containerID="818287b0f8f3f1d44f2a907bb97c9168062fe658aaa3193d97412871fa4ab3f8" exitCode=0 Mar 07 08:16:56 crc kubenswrapper[4761]: I0307 08:16:56.090636 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bc2f3dec-2838-4d30-93c2-631da252cdb7","Type":"ContainerDied","Data":"818287b0f8f3f1d44f2a907bb97c9168062fe658aaa3193d97412871fa4ab3f8"} Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.359842 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.396513 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.460688 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-st2wq"] Mar 07 08:17:01 crc kubenswrapper[4761]: E0307 08:17:01.461198 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="ceilometer-notification-agent" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461213 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="ceilometer-notification-agent" Mar 07 08:17:01 crc kubenswrapper[4761]: E0307 08:17:01.461253 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" containerName="setup-container" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461261 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" containerName="setup-container" Mar 07 08:17:01 crc kubenswrapper[4761]: E0307 
08:17:01.461277 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" containerName="rabbitmq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461287 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" containerName="rabbitmq" Mar 07 08:17:01 crc kubenswrapper[4761]: E0307 08:17:01.461303 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="sg-core" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461311 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="sg-core" Mar 07 08:17:01 crc kubenswrapper[4761]: E0307 08:17:01.461342 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="ceilometer-central-agent" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461348 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="ceilometer-central-agent" Mar 07 08:17:01 crc kubenswrapper[4761]: E0307 08:17:01.461357 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="proxy-httpd" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461363 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="proxy-httpd" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461572 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" containerName="rabbitmq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461588 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="sg-core" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461601 4761 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="ceilometer-central-agent" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461615 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="proxy-httpd" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461628 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="ceilometer-notification-agent" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.482618 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.504266 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.508872 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-st2wq"] Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.528459 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-erlang-cookie-secret\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.528506 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-plugins-conf\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.528536 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82v59\" (UniqueName: 
\"kubernetes.io/projected/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-kube-api-access-82v59\") pod \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.528563 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krzrn\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-kube-api-access-krzrn\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.528621 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-erlang-cookie\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.528660 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-ceilometer-tls-certs\") pod \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.528686 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-config-data\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.528705 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-pod-info\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc 
kubenswrapper[4761]: I0307 08:17:01.528778 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-run-httpd\") pod \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.529589 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.530409 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531220 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531334 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-server-conf\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531384 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-combined-ca-bundle\") pod \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531433 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-log-httpd\") pod \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531521 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-scripts\") pod \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531543 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-tls\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531559 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-plugins\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531658 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-confd\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531695 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-config-data\") pod \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531729 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-sg-core-conf-yaml\") pod \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.532307 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" (UID: "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.532749 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.532770 4761 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.532783 4761 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.532940 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.546114 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" (UID: "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.550287 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-scripts" (OuterVolumeSpecName: "scripts") pod "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" (UID: "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.550745 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-pod-info" (OuterVolumeSpecName: "pod-info") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.550784 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-kube-api-access-82v59" (OuterVolumeSpecName: "kube-api-access-82v59") pod "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" (UID: "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7"). InnerVolumeSpecName "kube-api-access-82v59". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.564018 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.575684 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.578463 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-kube-api-access-krzrn" (OuterVolumeSpecName: "kube-api-access-krzrn") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "kube-api-access-krzrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.637598 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.641405 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-config\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.641485 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.641690 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.641742 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrvwp\" (UniqueName: \"kubernetes.io/projected/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-kube-api-access-zrvwp\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.641875 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.642243 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.642645 4761 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.642663 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.642693 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.642703 4761 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.642734 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82v59\" (UniqueName: \"kubernetes.io/projected/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-kube-api-access-82v59\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.642747 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krzrn\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-kube-api-access-krzrn\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.642756 4761 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-pod-info\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.642765 4761 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-run-httpd\") on node 
\"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.675099 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" (UID: "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.733189 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" (UID: "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.736768 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519" (OuterVolumeSpecName: "persistence") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745218 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745263 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrvwp\" (UniqueName: \"kubernetes.io/projected/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-kube-api-access-zrvwp\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745326 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745365 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745433 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745491 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-config\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745513 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745617 4761 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745628 4761 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745648 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") on node \"crc\" " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.750371 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-config-data" (OuterVolumeSpecName: "config-data") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: 
"7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.751171 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-config\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.751414 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.751535 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.751813 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.751874 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.752100 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.774571 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrvwp\" (UniqueName: \"kubernetes.io/projected/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-kube-api-access-zrvwp\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.784793 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.792480 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-server-conf" (OuterVolumeSpecName: "server-conf") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.815158 4761 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.820523 4761 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519") on node "crc" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.847589 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.847616 4761 reconciler_common.go:293] "Volume detached for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.847628 4761 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-server-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.867732 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-config-data" (OuterVolumeSpecName: "config-data") pod "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" (UID: "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.883096 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.883420 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" (UID: "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.949993 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.950794 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.951122 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.115277 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:02 crc kubenswrapper[4761]: E0307 08:17:02.151192 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Mar 07 08:17:02 crc kubenswrapper[4761]: E0307 08:17:02.151253 4761 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Mar 07 08:17:02 crc kubenswrapper[4761]: E0307 08:17:02.151392 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qjrnt,ReadOnly:true,
MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-bhq7g_openstack(7f02c4d0-220b-4761-a494-7a054eef8672): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:17:02 crc kubenswrapper[4761]: E0307 08:17:02.152764 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-bhq7g" podUID="7f02c4d0-220b-4761-a494-7a054eef8672" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.255324 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bc2f3dec-2838-4d30-93c2-631da252cdb7","Type":"ContainerDied","Data":"f4fb3120122a5372512b2b348c9d0b61b0cb91030e2f3a5d057787a248ed6391"} Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.258802 4761 scope.go:117] "RemoveContainer" containerID="818287b0f8f3f1d44f2a907bb97c9168062fe658aaa3193d97412871fa4ab3f8" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.259020 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263130 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263214 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-server-conf\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263265 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-plugins\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263292 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-config-data\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263410 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjr25\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-kube-api-access-gjr25\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263427 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-erlang-cookie\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263502 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc2f3dec-2838-4d30-93c2-631da252cdb7-erlang-cookie-secret\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263552 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-plugins-conf\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263578 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-tls\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263620 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-confd\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263707 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2f3dec-2838-4d30-93c2-631da252cdb7-pod-info\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 
08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.267560 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.268027 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.269330 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7201e0b2-1f44-45f0-b746-b98f8cb01f8f","Type":"ContainerDied","Data":"8ff2eb14f63926a2787b9edf0a4314c17464aa3f349344a0ae0be7df60f72ec1"} Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.269954 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.270551 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.273655 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/bc2f3dec-2838-4d30-93c2-631da252cdb7-pod-info" (OuterVolumeSpecName: "pod-info") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.274097 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.274362 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc2f3dec-2838-4d30-93c2-631da252cdb7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.283130 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerDied","Data":"34896ffbbf74f44b14c712613aa10174155f897f28b13e7abcaec728564a98f8"} Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.283214 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.316165 4761 scope.go:117] "RemoveContainer" containerID="89a6b5588731808b0bfe82c5f4e9ce1720f8b54e7fe66d37411578cd9536d97b" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.325024 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134" (OuterVolumeSpecName: "persistence") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.336773 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-kube-api-access-gjr25" (OuterVolumeSpecName: "kube-api-access-gjr25") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "kube-api-access-gjr25". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.349942 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-config-data" (OuterVolumeSpecName: "config-data") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.369294 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-server-conf" (OuterVolumeSpecName: "server-conf") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: W0307 08:17:02.376359 4761 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bc2f3dec-2838-4d30-93c2-631da252cdb7/volumes/kubernetes.io~configmap/server-conf Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.377131 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-server-conf" (OuterVolumeSpecName: "server-conf") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.377292 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-server-conf\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379734 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379772 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379786 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjr25\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-kube-api-access-gjr25\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379800 4761 reconciler_common.go:293] "Volume 
detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379812 4761 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc2f3dec-2838-4d30-93c2-631da252cdb7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379822 4761 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379833 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379843 4761 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2f3dec-2838-4d30-93c2-631da252cdb7-pod-info\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379919 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") on node \"crc\" " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379935 4761 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-server-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.427152 4761 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping UnmountDevice... Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.427515 4761 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134") on node "crc" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.483513 4761 reconciler_common.go:293] "Volume detached for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.507274 4761 scope.go:117] "RemoveContainer" containerID="29cb38754c06ba4cf8ad902c0d21b151c7ca626800f06ecaaa2ef264e60c35b1" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.545927 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.577635 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.587503 4761 scope.go:117] "RemoveContainer" containerID="1e506ba29675507705351ff4dddbabf2575095cb15dab3309deefdd45c364615" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.588052 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.594335 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.618390 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.652705 4761 scope.go:117] "RemoveContainer" containerID="1b116038dda4e380159781b521072c45b5692624f648eb7a92c12570548c9ed1" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.684385 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.725657 4761 scope.go:117] "RemoveContainer" containerID="ec657063ed3702df6bc015d81438f5457328c6560f0575fd6cd8b2b872832a1e" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.737065 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:17:02 crc kubenswrapper[4761]: E0307 08:17:02.738421 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" containerName="rabbitmq" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.738443 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" containerName="rabbitmq" Mar 07 
08:17:02 crc kubenswrapper[4761]: E0307 08:17:02.738476 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" containerName="setup-container" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.738483 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" containerName="setup-container" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.739084 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" containerName="rabbitmq" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.745826 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.748087 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.748214 4761 scope.go:117] "RemoveContainer" containerID="36f8598b61bc3ed2a6a6a0981f0562b344c4139bd8288ab5f69cfa08c4a9cbf6" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.748484 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.749066 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.771924 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.774443 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.780012 4761 scope.go:117] "RemoveContainer" containerID="d376489038c4ccb7f00aef3539237578093f1457f6e64018ec840ed437e42319" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.787622 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.800745 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.814875 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.828516 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.840108 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.844033 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.847402 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.848313 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xhskz" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.848482 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.848660 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.848708 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.848917 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.850762 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.854421 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-st2wq"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.869035 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911017 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 
08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911073 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911105 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911160 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-config-data\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911179 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911201 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/894f6ffc-2563-49a6-913d-6b0b83a70fa3-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911231 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94l94\" (UniqueName: \"kubernetes.io/projected/2bdde810-6429-4553-a9bb-1ccef1f89e2d-kube-api-access-94l94\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911249 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-scripts\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911281 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bdde810-6429-4553-a9bb-1ccef1f89e2d-run-httpd\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911301 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc85h\" (UniqueName: \"kubernetes.io/projected/894f6ffc-2563-49a6-913d-6b0b83a70fa3-kube-api-access-pc85h\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911332 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/894f6ffc-2563-49a6-913d-6b0b83a70fa3-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911354 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bdde810-6429-4553-a9bb-1ccef1f89e2d-log-httpd\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911415 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911536 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/894f6ffc-2563-49a6-913d-6b0b83a70fa3-config-data\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911611 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/894f6ffc-2563-49a6-913d-6b0b83a70fa3-server-conf\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911755 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/894f6ffc-2563-49a6-913d-6b0b83a70fa3-pod-info\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911890 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911968 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.912012 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.013497 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94l94\" (UniqueName: \"kubernetes.io/projected/2bdde810-6429-4553-a9bb-1ccef1f89e2d-kube-api-access-94l94\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014558 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-scripts\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014615 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014640 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bdde810-6429-4553-a9bb-1ccef1f89e2d-run-httpd\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014661 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014696 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc85h\" (UniqueName: \"kubernetes.io/projected/894f6ffc-2563-49a6-913d-6b0b83a70fa3-kube-api-access-pc85h\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014757 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/894f6ffc-2563-49a6-913d-6b0b83a70fa3-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014795 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc 
kubenswrapper[4761]: I0307 08:17:03.014824 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bdde810-6429-4553-a9bb-1ccef1f89e2d-log-httpd\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014851 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014887 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/894f6ffc-2563-49a6-913d-6b0b83a70fa3-config-data\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014909 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/894f6ffc-2563-49a6-913d-6b0b83a70fa3-server-conf\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014947 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014982 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015011 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/894f6ffc-2563-49a6-913d-6b0b83a70fa3-pod-info\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015215 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015253 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015276 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6l27\" (UniqueName: \"kubernetes.io/projected/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-kube-api-access-f6l27\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015309 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015334 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015392 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015421 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015440 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015471 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015516 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015548 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-config-data\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015574 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015601 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/894f6ffc-2563-49a6-913d-6b0b83a70fa3-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015634 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015659 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.017486 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bdde810-6429-4553-a9bb-1ccef1f89e2d-log-httpd\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.017573 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bdde810-6429-4553-a9bb-1ccef1f89e2d-run-httpd\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.017605 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.018666 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/894f6ffc-2563-49a6-913d-6b0b83a70fa3-server-conf\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " 
pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.019220 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/894f6ffc-2563-49a6-913d-6b0b83a70fa3-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.019301 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/894f6ffc-2563-49a6-913d-6b0b83a70fa3-config-data\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.019736 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.024978 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/894f6ffc-2563-49a6-913d-6b0b83a70fa3-pod-info\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.026640 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/894f6ffc-2563-49a6-913d-6b0b83a70fa3-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.026828 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-scripts\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.026876 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.026886 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.027244 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.027281 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/df547fdc21673de1cc702cfc619e77e1e5934613434f5da0c9db8a26fc9b248e/globalmount\"" pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.027331 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.027777 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-config-data\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.033769 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.041132 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc85h\" (UniqueName: \"kubernetes.io/projected/894f6ffc-2563-49a6-913d-6b0b83a70fa3-kube-api-access-pc85h\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " 
pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.041426 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94l94\" (UniqueName: \"kubernetes.io/projected/2bdde810-6429-4553-a9bb-1ccef1f89e2d-kube-api-access-94l94\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.049010 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.070344 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.125880 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.126040 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.126684 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.126732 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.126828 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.126868 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.126929 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.127004 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-erlang-cookie-secret\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.127037 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.127137 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6l27\" (UniqueName: \"kubernetes.io/projected/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-kube-api-access-f6l27\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.127188 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.127250 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.131982 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.126930 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.133451 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.133564 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.133710 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.134867 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.134983 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.137051 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.137102 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/860627d4bd50531ff33cb398731d7440ae9b5625a3c0a76764756dbab322d2ce/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.139379 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.152374 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6l27\" (UniqueName: \"kubernetes.io/projected/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-kube-api-access-f6l27\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.166494 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.187212 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.302540 4761 generic.go:334] "Generic (PLEG): container finished" podID="1d7c0bc7-4f05-4dce-b048-beb5e89946bc" containerID="5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91" exitCode=0 Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.304463 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" event={"ID":"1d7c0bc7-4f05-4dce-b048-beb5e89946bc","Type":"ContainerDied","Data":"5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91"} Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.304521 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" event={"ID":"1d7c0bc7-4f05-4dce-b048-beb5e89946bc","Type":"ContainerStarted","Data":"4462c2642c836aa59b1bffae98ae4ad0b394e7d26a19751dd255ec2337fa1c50"} Mar 07 08:17:03 crc kubenswrapper[4761]: E0307 08:17:03.307559 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-bhq7g" podUID="7f02c4d0-220b-4761-a494-7a054eef8672" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.397558 4761 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.485970 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.616359 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.738202 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" path="/var/lib/kubelet/pods/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7/volumes" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.742228 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" path="/var/lib/kubelet/pods/7201e0b2-1f44-45f0-b746-b98f8cb01f8f/volumes" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.744462 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" path="/var/lib/kubelet/pods/bc2f3dec-2838-4d30-93c2-631da252cdb7/volumes" Mar 07 08:17:03 crc kubenswrapper[4761]: W0307 08:17:03.968859 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod894f6ffc_2563_49a6_913d_6b0b83a70fa3.slice/crio-b0f429b3232d079287632d677ce7c6d5653fa75cefda7f6aea8ae607d4708cb5 WatchSource:0}: Error finding container b0f429b3232d079287632d677ce7c6d5653fa75cefda7f6aea8ae607d4708cb5: Status 404 returned error can't find the container with id b0f429b3232d079287632d677ce7c6d5653fa75cefda7f6aea8ae607d4708cb5 Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.986121 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 07 08:17:04 crc kubenswrapper[4761]: I0307 08:17:04.123453 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:17:04 crc kubenswrapper[4761]: W0307 08:17:04.127989 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee9f03ce_b3a6_440c_8b34_16c66dac3e00.slice/crio-1345e464c783eadebc2091911176236f57d868c14f7bb74ebd8b0584e2535360 WatchSource:0}: Error finding container 1345e464c783eadebc2091911176236f57d868c14f7bb74ebd8b0584e2535360: Status 404 returned error can't find the container with id 1345e464c783eadebc2091911176236f57d868c14f7bb74ebd8b0584e2535360 Mar 07 08:17:04 crc kubenswrapper[4761]: I0307 08:17:04.328322 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"894f6ffc-2563-49a6-913d-6b0b83a70fa3","Type":"ContainerStarted","Data":"b0f429b3232d079287632d677ce7c6d5653fa75cefda7f6aea8ae607d4708cb5"} Mar 07 08:17:04 crc kubenswrapper[4761]: I0307 08:17:04.331900 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" event={"ID":"1d7c0bc7-4f05-4dce-b048-beb5e89946bc","Type":"ContainerStarted","Data":"cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e"} Mar 07 08:17:04 crc kubenswrapper[4761]: I0307 08:17:04.333613 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:04 crc kubenswrapper[4761]: I0307 08:17:04.336694 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ee9f03ce-b3a6-440c-8b34-16c66dac3e00","Type":"ContainerStarted","Data":"1345e464c783eadebc2091911176236f57d868c14f7bb74ebd8b0584e2535360"} Mar 07 08:17:04 crc kubenswrapper[4761]: I0307 08:17:04.340099 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bdde810-6429-4553-a9bb-1ccef1f89e2d","Type":"ContainerStarted","Data":"27bf3a33a35bac82a5e47061da1475bd2c2394a38a3b33a7c22efcf38757ab84"} Mar 07 
08:17:04 crc kubenswrapper[4761]: I0307 08:17:04.365643 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" podStartSLOduration=3.365625265 podStartE2EDuration="3.365625265s" podCreationTimestamp="2026-03-07 08:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:17:04.351145364 +0000 UTC m=+1681.260311839" watchObservedRunningTime="2026-03-07 08:17:04.365625265 +0000 UTC m=+1681.274791740" Mar 07 08:17:06 crc kubenswrapper[4761]: I0307 08:17:06.365175 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"894f6ffc-2563-49a6-913d-6b0b83a70fa3","Type":"ContainerStarted","Data":"3119aad3bad18654f546224b70b567df27d1eccea87fd252ee7a64a946448493"} Mar 07 08:17:06 crc kubenswrapper[4761]: I0307 08:17:06.369128 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ee9f03ce-b3a6-440c-8b34-16c66dac3e00","Type":"ContainerStarted","Data":"5f85b7d01b435ebc644a16d877cffc84b9915f3cfecbcc3859a2b12cf4d1027e"} Mar 07 08:17:08 crc kubenswrapper[4761]: I0307 08:17:08.391667 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bdde810-6429-4553-a9bb-1ccef1f89e2d","Type":"ContainerStarted","Data":"1443e56814c28961324049739b81f51a64652ab0da2dbb7afb348838a00f0e1f"} Mar 07 08:17:09 crc kubenswrapper[4761]: I0307 08:17:09.407459 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bdde810-6429-4553-a9bb-1ccef1f89e2d","Type":"ContainerStarted","Data":"0c08d34ec8720616c59f7580d688843a00bf55ebe7072212422af31faddefb3a"} Mar 07 08:17:10 crc kubenswrapper[4761]: I0307 08:17:10.420416 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2bdde810-6429-4553-a9bb-1ccef1f89e2d","Type":"ContainerStarted","Data":"5b70ee293be49eecb485a2c771590e9ec00fc7af8a6547ae1d2c3cc94c00f78c"} Mar 07 08:17:11 crc kubenswrapper[4761]: I0307 08:17:11.788925 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:11 crc kubenswrapper[4761]: I0307 08:17:11.871189 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"] Mar 07 08:17:11 crc kubenswrapper[4761]: I0307 08:17:11.871424 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" podUID="17b567eb-878f-4cb2-9da6-7d04193f02e7" containerName="dnsmasq-dns" containerID="cri-o://6b8d401dab7334c08e66ac3f5216b07310afe3106177b3008889e75b361dfdf4" gracePeriod=10 Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.055074 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-rjbxk"] Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.060984 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.079097 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-rjbxk"]
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.192520 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lvlt\" (UniqueName: \"kubernetes.io/projected/3322ce20-e09c-4b31-add3-d54b0a38fbae-kube-api-access-7lvlt\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.192601 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.192711 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.192787 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-config\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.192814 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.193086 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.193253 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.295200 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lvlt\" (UniqueName: \"kubernetes.io/projected/3322ce20-e09c-4b31-add3-d54b0a38fbae-kube-api-access-7lvlt\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.295267 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.295321 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.295363 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-config\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.295382 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.295442 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.295482 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.296381 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.297072 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.297206 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-config\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.297274 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.297478 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.297646 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.317380 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lvlt\" (UniqueName: \"kubernetes.io/projected/3322ce20-e09c-4b31-add3-d54b0a38fbae-kube-api-access-7lvlt\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.390428 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.439050 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" podUID="17b567eb-878f-4cb2-9da6-7d04193f02e7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.9:5353: connect: connection refused"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.444423 4761 generic.go:334] "Generic (PLEG): container finished" podID="17b567eb-878f-4cb2-9da6-7d04193f02e7" containerID="6b8d401dab7334c08e66ac3f5216b07310afe3106177b3008889e75b361dfdf4" exitCode=0
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.444519 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" event={"ID":"17b567eb-878f-4cb2-9da6-7d04193f02e7","Type":"ContainerDied","Data":"6b8d401dab7334c08e66ac3f5216b07310afe3106177b3008889e75b361dfdf4"}
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.448995 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bdde810-6429-4553-a9bb-1ccef1f89e2d","Type":"ContainerStarted","Data":"7799630c88453dd7a9231642a07bace19fae321eea09bd99b8f312e7bd3f8969"}
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.450510 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.485362 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.481300521 podStartE2EDuration="10.485342916s" podCreationTimestamp="2026-03-07 08:17:02 +0000 UTC" firstStartedPulling="2026-03-07 08:17:03.633257308 +0000 UTC m=+1680.542423783" lastFinishedPulling="2026-03-07 08:17:11.637299703 +0000 UTC m=+1688.546466178" observedRunningTime="2026-03-07 08:17:12.471398748 +0000 UTC m=+1689.380565213" watchObservedRunningTime="2026-03-07 08:17:12.485342916 +0000 UTC m=+1689.394509391"
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.085331 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-rjbxk"]
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.390284 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.486072 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" event={"ID":"17b567eb-878f-4cb2-9da6-7d04193f02e7","Type":"ContainerDied","Data":"8caa1dca21d992e48acf15843168d308bfc2d2443ea50cbda5239b58c25dbe0b"}
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.486121 4761 scope.go:117] "RemoveContainer" containerID="6b8d401dab7334c08e66ac3f5216b07310afe3106177b3008889e75b361dfdf4"
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.486248 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.492019 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" event={"ID":"3322ce20-e09c-4b31-add3-d54b0a38fbae","Type":"ContainerStarted","Data":"d46ac3b8a573635ac6fcf8184e86dec0328dffea9d6247ff42913f3fba72a7ae"}
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.512988 4761 scope.go:117] "RemoveContainer" containerID="c9434e396ec8273fc0ff635acc03c308492a76b2cb653926f6ce7a0fb4bf25ef"
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.543418 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-config\") pod \"17b567eb-878f-4cb2-9da6-7d04193f02e7\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") "
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.543469 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-nb\") pod \"17b567eb-878f-4cb2-9da6-7d04193f02e7\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") "
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.543503 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kbd2\" (UniqueName: \"kubernetes.io/projected/17b567eb-878f-4cb2-9da6-7d04193f02e7-kube-api-access-4kbd2\") pod \"17b567eb-878f-4cb2-9da6-7d04193f02e7\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") "
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.543538 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-swift-storage-0\") pod \"17b567eb-878f-4cb2-9da6-7d04193f02e7\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") "
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.543635 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-svc\") pod \"17b567eb-878f-4cb2-9da6-7d04193f02e7\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") "
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.543670 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-sb\") pod \"17b567eb-878f-4cb2-9da6-7d04193f02e7\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") "
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.557961 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17b567eb-878f-4cb2-9da6-7d04193f02e7-kube-api-access-4kbd2" (OuterVolumeSpecName: "kube-api-access-4kbd2") pod "17b567eb-878f-4cb2-9da6-7d04193f02e7" (UID: "17b567eb-878f-4cb2-9da6-7d04193f02e7"). InnerVolumeSpecName "kube-api-access-4kbd2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.617488 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "17b567eb-878f-4cb2-9da6-7d04193f02e7" (UID: "17b567eb-878f-4cb2-9da6-7d04193f02e7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.617526 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-config" (OuterVolumeSpecName: "config") pod "17b567eb-878f-4cb2-9da6-7d04193f02e7" (UID: "17b567eb-878f-4cb2-9da6-7d04193f02e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.622364 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "17b567eb-878f-4cb2-9da6-7d04193f02e7" (UID: "17b567eb-878f-4cb2-9da6-7d04193f02e7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.634518 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "17b567eb-878f-4cb2-9da6-7d04193f02e7" (UID: "17b567eb-878f-4cb2-9da6-7d04193f02e7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.637017 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "17b567eb-878f-4cb2-9da6-7d04193f02e7" (UID: "17b567eb-878f-4cb2-9da6-7d04193f02e7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.646602 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-config\") on node \"crc\" DevicePath \"\""
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.646625 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.646637 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kbd2\" (UniqueName: \"kubernetes.io/projected/17b567eb-878f-4cb2-9da6-7d04193f02e7-kube-api-access-4kbd2\") on node \"crc\" DevicePath \"\""
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.646647 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.646657 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.646665 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.702828 4761 scope.go:117] "RemoveContainer" containerID="a3b1a9637d6c680134f028c0b657f4d1920c25e24ee30a7a62adf8d224b1cdc5"
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.754759 4761 scope.go:117] "RemoveContainer" containerID="411930607eac514bd071597b40dd8906cabf45add842a076667a07a5d0a6cff5"
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.839569 4761 scope.go:117] "RemoveContainer" containerID="4601975d730dbd935aa6c0dc81636d749aa74204df5d49980d3658c09cc61dfc"
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.889056 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"]
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.902158 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"]
Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.937235 4761 scope.go:117] "RemoveContainer" containerID="bf13a0b3293e1a4646cfabacb3571d4c17dab1d592a83ac1045bde8ab5526426"
Mar 07 08:17:14 crc kubenswrapper[4761]: I0307 08:17:14.502800 4761 generic.go:334] "Generic (PLEG): container finished" podID="3322ce20-e09c-4b31-add3-d54b0a38fbae" containerID="acefc32c35e29401abe52e052c06837573740d6c9e2bdbf7f8998f2760846bb8" exitCode=0
Mar 07 08:17:14 crc kubenswrapper[4761]: I0307 08:17:14.502846 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" event={"ID":"3322ce20-e09c-4b31-add3-d54b0a38fbae","Type":"ContainerDied","Data":"acefc32c35e29401abe52e052c06837573740d6c9e2bdbf7f8998f2760846bb8"}
Mar 07 08:17:15 crc kubenswrapper[4761]: I0307 08:17:15.524419 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" event={"ID":"3322ce20-e09c-4b31-add3-d54b0a38fbae","Type":"ContainerStarted","Data":"74f23b1eb505dc99e5a4f52a11f3d10d2e18ca5481ca19a6c5b7e3fadbe7e597"}
Mar 07 08:17:15 crc kubenswrapper[4761]: I0307 08:17:15.526481 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:15 crc kubenswrapper[4761]: I0307 08:17:15.547499 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" podStartSLOduration=3.547476694 podStartE2EDuration="3.547476694s" podCreationTimestamp="2026-03-07 08:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:17:15.546542441 +0000 UTC m=+1692.455708916" watchObservedRunningTime="2026-03-07 08:17:15.547476694 +0000 UTC m=+1692.456643169"
Mar 07 08:17:15 crc kubenswrapper[4761]: I0307 08:17:15.721951 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17b567eb-878f-4cb2-9da6-7d04193f02e7" path="/var/lib/kubelet/pods/17b567eb-878f-4cb2-9da6-7d04193f02e7/volumes"
Mar 07 08:17:18 crc kubenswrapper[4761]: I0307 08:17:18.709336 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 07 08:17:19 crc kubenswrapper[4761]: I0307 08:17:19.572870 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bhq7g" event={"ID":"7f02c4d0-220b-4761-a494-7a054eef8672","Type":"ContainerStarted","Data":"548972e02784866505e9c24ffd4b574561fc0ad963d71d809b954ff28861a93e"}
Mar 07 08:17:21 crc kubenswrapper[4761]: I0307 08:17:21.594099 4761 generic.go:334] "Generic (PLEG): container finished" podID="7f02c4d0-220b-4761-a494-7a054eef8672" containerID="548972e02784866505e9c24ffd4b574561fc0ad963d71d809b954ff28861a93e" exitCode=0
Mar 07 08:17:21 crc kubenswrapper[4761]: I0307 08:17:21.594187 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bhq7g" event={"ID":"7f02c4d0-220b-4761-a494-7a054eef8672","Type":"ContainerDied","Data":"548972e02784866505e9c24ffd4b574561fc0ad963d71d809b954ff28861a93e"}
Mar 07 08:17:22 crc kubenswrapper[4761]: I0307 08:17:22.392961 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk"
Mar 07 08:17:22 crc kubenswrapper[4761]: I0307 08:17:22.490290 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-st2wq"]
Mar 07 08:17:22 crc kubenswrapper[4761]: I0307 08:17:22.490629 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" podUID="1d7c0bc7-4f05-4dce-b048-beb5e89946bc" containerName="dnsmasq-dns" containerID="cri-o://cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e" gracePeriod=10
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.259813 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-bhq7g"
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.266270 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq"
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.324435 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrvwp\" (UniqueName: \"kubernetes.io/projected/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-kube-api-access-zrvwp\") pod \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") "
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.324518 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-sb\") pod \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") "
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.324542 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjrnt\" (UniqueName: \"kubernetes.io/projected/7f02c4d0-220b-4761-a494-7a054eef8672-kube-api-access-qjrnt\") pod \"7f02c4d0-220b-4761-a494-7a054eef8672\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") "
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.324653 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-swift-storage-0\") pod \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") "
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.324725 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-combined-ca-bundle\") pod \"7f02c4d0-220b-4761-a494-7a054eef8672\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") "
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.324769 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-svc\") pod \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") "
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.324806 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-nb\") pod \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") "
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.324895 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-openstack-edpm-ipam\") pod \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") "
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.324980 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-config\") pod \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") "
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.325009 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-config-data\") pod \"7f02c4d0-220b-4761-a494-7a054eef8672\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") "
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.344967 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-kube-api-access-zrvwp" (OuterVolumeSpecName: "kube-api-access-zrvwp") pod "1d7c0bc7-4f05-4dce-b048-beb5e89946bc" (UID: "1d7c0bc7-4f05-4dce-b048-beb5e89946bc"). InnerVolumeSpecName "kube-api-access-zrvwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.345036 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f02c4d0-220b-4761-a494-7a054eef8672-kube-api-access-qjrnt" (OuterVolumeSpecName: "kube-api-access-qjrnt") pod "7f02c4d0-220b-4761-a494-7a054eef8672" (UID: "7f02c4d0-220b-4761-a494-7a054eef8672"). InnerVolumeSpecName "kube-api-access-qjrnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.417768 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f02c4d0-220b-4761-a494-7a054eef8672" (UID: "7f02c4d0-220b-4761-a494-7a054eef8672"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.427821 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "1d7c0bc7-4f05-4dce-b048-beb5e89946bc" (UID: "1d7c0bc7-4f05-4dce-b048-beb5e89946bc"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.428855 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1d7c0bc7-4f05-4dce-b048-beb5e89946bc" (UID: "1d7c0bc7-4f05-4dce-b048-beb5e89946bc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.429043 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.429086 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.429101 4761 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.429114 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrvwp\" (UniqueName: \"kubernetes.io/projected/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-kube-api-access-zrvwp\") on node \"crc\" DevicePath \"\""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.429137 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjrnt\" (UniqueName: \"kubernetes.io/projected/7f02c4d0-220b-4761-a494-7a054eef8672-kube-api-access-qjrnt\") on node \"crc\" DevicePath \"\""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.442084 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d7c0bc7-4f05-4dce-b048-beb5e89946bc" (UID: "1d7c0bc7-4f05-4dce-b048-beb5e89946bc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.442779 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d7c0bc7-4f05-4dce-b048-beb5e89946bc" (UID: "1d7c0bc7-4f05-4dce-b048-beb5e89946bc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.444181 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-config" (OuterVolumeSpecName: "config") pod "1d7c0bc7-4f05-4dce-b048-beb5e89946bc" (UID: "1d7c0bc7-4f05-4dce-b048-beb5e89946bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.445371 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d7c0bc7-4f05-4dce-b048-beb5e89946bc" (UID: "1d7c0bc7-4f05-4dce-b048-beb5e89946bc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.457554 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-config-data" (OuterVolumeSpecName: "config-data") pod "7f02c4d0-220b-4761-a494-7a054eef8672" (UID: "7f02c4d0-220b-4761-a494-7a054eef8672"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.532560 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.532597 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.532608 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-config\") on node \"crc\" DevicePath \"\""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.532618 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-config-data\") on node \"crc\" DevicePath \"\""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.532626 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.656017 4761 generic.go:334] "Generic (PLEG): container finished" podID="1d7c0bc7-4f05-4dce-b048-beb5e89946bc" containerID="cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e" exitCode=0
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.656134 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" event={"ID":"1d7c0bc7-4f05-4dce-b048-beb5e89946bc","Type":"ContainerDied","Data":"cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e"}
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.656145 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq"
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.656160 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" event={"ID":"1d7c0bc7-4f05-4dce-b048-beb5e89946bc","Type":"ContainerDied","Data":"4462c2642c836aa59b1bffae98ae4ad0b394e7d26a19751dd255ec2337fa1c50"}
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.656176 4761 scope.go:117] "RemoveContainer" containerID="cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e"
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.660602 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bhq7g" event={"ID":"7f02c4d0-220b-4761-a494-7a054eef8672","Type":"ContainerDied","Data":"6bcb7a4eb62fb26c45bb7aca3cf5cc559db3473a1fe9e0879f24bf81dcd894e7"}
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.660636 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bcb7a4eb62fb26c45bb7aca3cf5cc559db3473a1fe9e0879f24bf81dcd894e7"
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.660686 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-bhq7g"
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.800068 4761 scope.go:117] "RemoveContainer" containerID="5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91"
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.809500 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-st2wq"]
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.841330 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-st2wq"]
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.883282 4761 scope.go:117] "RemoveContainer" containerID="cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e"
Mar 07 08:17:23 crc kubenswrapper[4761]: E0307 08:17:23.883611 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e\": container with ID starting with cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e not found: ID does not exist" containerID="cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e"
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.883644 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e"} err="failed to get container status \"cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e\": rpc error: code = NotFound desc = could not find container \"cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e\": container with ID starting with cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e not found: ID does not exist"
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.883665 4761 scope.go:117] "RemoveContainer" containerID="5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91"
Mar 07 08:17:23 crc kubenswrapper[4761]: E0307 08:17:23.884251 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91\": container with ID starting with 5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91 not found: ID does not exist" containerID="5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91"
Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.884408 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91"} err="failed to get container status \"5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91\": rpc error: code = NotFound desc = could not find container \"5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91\": container with ID starting with 5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91 not found: ID does not exist"
Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.770792 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7764c87546-svl8g"]
Mar 07 08:17:24 crc kubenswrapper[4761]: E0307 08:17:24.771694 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b567eb-878f-4cb2-9da6-7d04193f02e7" containerName="init"
Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.771729 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b567eb-878f-4cb2-9da6-7d04193f02e7" containerName="init"
Mar 07 08:17:24 crc kubenswrapper[4761]: E0307 08:17:24.771758 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b567eb-878f-4cb2-9da6-7d04193f02e7" containerName="dnsmasq-dns"
Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.771766 4761 state_mem.go:107]
"Deleted CPUSet assignment" podUID="17b567eb-878f-4cb2-9da6-7d04193f02e7" containerName="dnsmasq-dns" Mar 07 08:17:24 crc kubenswrapper[4761]: E0307 08:17:24.771780 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d7c0bc7-4f05-4dce-b048-beb5e89946bc" containerName="dnsmasq-dns" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.771788 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7c0bc7-4f05-4dce-b048-beb5e89946bc" containerName="dnsmasq-dns" Mar 07 08:17:24 crc kubenswrapper[4761]: E0307 08:17:24.771816 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f02c4d0-220b-4761-a494-7a054eef8672" containerName="heat-db-sync" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.771824 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f02c4d0-220b-4761-a494-7a054eef8672" containerName="heat-db-sync" Mar 07 08:17:24 crc kubenswrapper[4761]: E0307 08:17:24.771836 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d7c0bc7-4f05-4dce-b048-beb5e89946bc" containerName="init" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.771845 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7c0bc7-4f05-4dce-b048-beb5e89946bc" containerName="init" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.772127 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f02c4d0-220b-4761-a494-7a054eef8672" containerName="heat-db-sync" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.772146 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d7c0bc7-4f05-4dce-b048-beb5e89946bc" containerName="dnsmasq-dns" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.772161 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="17b567eb-878f-4cb2-9da6-7d04193f02e7" containerName="dnsmasq-dns" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.773169 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.785479 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7764c87546-svl8g"] Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.826901 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5d698bbbb-b4tpc"] Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.828994 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.854031 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7d497d755f-jwccr"] Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.855790 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865486 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-config-data-custom\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865535 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-public-tls-certs\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865574 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm6l7\" (UniqueName: 
\"kubernetes.io/projected/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-kube-api-access-cm6l7\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865612 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-config-data-custom\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865667 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-combined-ca-bundle\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865729 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-combined-ca-bundle\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865768 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg2vh\" (UniqueName: \"kubernetes.io/projected/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-kube-api-access-gg2vh\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865806 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-internal-tls-certs\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865841 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-config-data\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865867 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-config-data\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.888835 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7d497d755f-jwccr"] Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.899704 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5d698bbbb-b4tpc"] Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.968302 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-config-data-custom\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.968374 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-combined-ca-bundle\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.968396 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-config-data\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.968613 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-internal-tls-certs\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.968744 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-combined-ca-bundle\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.968877 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg2vh\" (UniqueName: \"kubernetes.io/projected/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-kube-api-access-gg2vh\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.968989 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-internal-tls-certs\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.969041 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-public-tls-certs\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.969107 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-config-data\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.969158 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-config-data\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.969221 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvbm4\" (UniqueName: \"kubernetes.io/projected/3336529a-b93c-46c9-844b-337e4ef49f98-kube-api-access-mvbm4\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.969260 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-config-data-custom\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.969322 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-public-tls-certs\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.969407 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm6l7\" (UniqueName: \"kubernetes.io/projected/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-kube-api-access-cm6l7\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.969463 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-combined-ca-bundle\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.969507 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-config-data-custom\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.973259 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-combined-ca-bundle\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.973642 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-config-data-custom\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.974332 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-public-tls-certs\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.975323 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-combined-ca-bundle\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.975768 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-config-data\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.976051 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-config-data\") pod \"heat-engine-7764c87546-svl8g\" (UID: 
\"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.976543 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-internal-tls-certs\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.979383 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-config-data-custom\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.992096 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm6l7\" (UniqueName: \"kubernetes.io/projected/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-kube-api-access-cm6l7\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.993099 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg2vh\" (UniqueName: \"kubernetes.io/projected/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-kube-api-access-gg2vh\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.072265 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-internal-tls-certs\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " 
pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.072429 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-public-tls-certs\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.072502 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvbm4\" (UniqueName: \"kubernetes.io/projected/3336529a-b93c-46c9-844b-337e4ef49f98-kube-api-access-mvbm4\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.072600 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-combined-ca-bundle\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.072650 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-config-data-custom\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.072707 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-config-data\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " 
pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.076927 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-config-data-custom\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.077376 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-config-data\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.077427 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-public-tls-certs\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.078098 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-combined-ca-bundle\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.078293 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-internal-tls-certs\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: 
I0307 08:17:25.092816 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvbm4\" (UniqueName: \"kubernetes.io/projected/3336529a-b93c-46c9-844b-337e4ef49f98-kube-api-access-mvbm4\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.148083 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.160794 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.175740 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.726047 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d7c0bc7-4f05-4dce-b048-beb5e89946bc" path="/var/lib/kubelet/pods/1d7c0bc7-4f05-4dce-b048-beb5e89946bc/volumes" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.730612 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5d698bbbb-b4tpc"] Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.730656 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7764c87546-svl8g"] Mar 07 08:17:25 crc kubenswrapper[4761]: W0307 08:17:25.730748 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d7da3dc_9c5e_4a91_aa4a_e3677dda3e12.slice/crio-b32e34222851f483d2d2fe3d29fc17c9fe268def4abd70b33f48e25a0e4acf2b WatchSource:0}: Error finding container b32e34222851f483d2d2fe3d29fc17c9fe268def4abd70b33f48e25a0e4acf2b: Status 404 returned error can't find the container with id 
b32e34222851f483d2d2fe3d29fc17c9fe268def4abd70b33f48e25a0e4acf2b Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.969897 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7d497d755f-jwccr"] Mar 07 08:17:25 crc kubenswrapper[4761]: W0307 08:17:25.971891 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3336529a_b93c_46c9_844b_337e4ef49f98.slice/crio-463421f70b1965d1d65ef10f5864c6c3421391a7df39a5e7705643080e30c297 WatchSource:0}: Error finding container 463421f70b1965d1d65ef10f5864c6c3421391a7df39a5e7705643080e30c297: Status 404 returned error can't find the container with id 463421f70b1965d1d65ef10f5864c6c3421391a7df39a5e7705643080e30c297 Mar 07 08:17:26 crc kubenswrapper[4761]: I0307 08:17:26.749507 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d698bbbb-b4tpc" event={"ID":"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4","Type":"ContainerStarted","Data":"265669422d947c2df760e3e246f6f1b44f03b8bc249f3071d843e65b9fc05cad"} Mar 07 08:17:26 crc kubenswrapper[4761]: I0307 08:17:26.764904 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7d497d755f-jwccr" event={"ID":"3336529a-b93c-46c9-844b-337e4ef49f98","Type":"ContainerStarted","Data":"463421f70b1965d1d65ef10f5864c6c3421391a7df39a5e7705643080e30c297"} Mar 07 08:17:26 crc kubenswrapper[4761]: I0307 08:17:26.811077 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7764c87546-svl8g" event={"ID":"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12","Type":"ContainerStarted","Data":"d789434594cee97dacff8f9bbd8916b65c7d8cb68c6521603e7725fbb88c6210"} Mar 07 08:17:26 crc kubenswrapper[4761]: I0307 08:17:26.811118 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7764c87546-svl8g" 
event={"ID":"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12","Type":"ContainerStarted","Data":"b32e34222851f483d2d2fe3d29fc17c9fe268def4abd70b33f48e25a0e4acf2b"} Mar 07 08:17:26 crc kubenswrapper[4761]: I0307 08:17:26.812420 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:26 crc kubenswrapper[4761]: I0307 08:17:26.840108 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7764c87546-svl8g" podStartSLOduration=2.840088686 podStartE2EDuration="2.840088686s" podCreationTimestamp="2026-03-07 08:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:17:26.836901676 +0000 UTC m=+1703.746068151" watchObservedRunningTime="2026-03-07 08:17:26.840088686 +0000 UTC m=+1703.749255161" Mar 07 08:17:28 crc kubenswrapper[4761]: I0307 08:17:28.849156 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d698bbbb-b4tpc" event={"ID":"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4","Type":"ContainerStarted","Data":"644d5686b6966b1f70b85bd69c594fe0d185429e935d82a649dffd66c0485331"} Mar 07 08:17:28 crc kubenswrapper[4761]: I0307 08:17:28.851462 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:28 crc kubenswrapper[4761]: I0307 08:17:28.855786 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7d497d755f-jwccr" event={"ID":"3336529a-b93c-46c9-844b-337e4ef49f98","Type":"ContainerStarted","Data":"e12997a07bb10bc5d717b8209b6a61650c617f3bb5c322ade80dd48bfea5c3e9"} Mar 07 08:17:28 crc kubenswrapper[4761]: I0307 08:17:28.856263 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:28 crc kubenswrapper[4761]: I0307 08:17:28.875897 4761 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/heat-api-5d698bbbb-b4tpc" podStartSLOduration=2.996594668 podStartE2EDuration="4.875877664s" podCreationTimestamp="2026-03-07 08:17:24 +0000 UTC" firstStartedPulling="2026-03-07 08:17:25.729109284 +0000 UTC m=+1702.638275759" lastFinishedPulling="2026-03-07 08:17:27.60839228 +0000 UTC m=+1704.517558755" observedRunningTime="2026-03-07 08:17:28.873885044 +0000 UTC m=+1705.783051529" watchObservedRunningTime="2026-03-07 08:17:28.875877664 +0000 UTC m=+1705.785044139" Mar 07 08:17:28 crc kubenswrapper[4761]: I0307 08:17:28.900939 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7d497d755f-jwccr" podStartSLOduration=3.263176207 podStartE2EDuration="4.900901628s" podCreationTimestamp="2026-03-07 08:17:24 +0000 UTC" firstStartedPulling="2026-03-07 08:17:25.973314995 +0000 UTC m=+1702.882481470" lastFinishedPulling="2026-03-07 08:17:27.611040416 +0000 UTC m=+1704.520206891" observedRunningTime="2026-03-07 08:17:28.900386795 +0000 UTC m=+1705.809553290" watchObservedRunningTime="2026-03-07 08:17:28.900901628 +0000 UTC m=+1705.810068113" Mar 07 08:17:29 crc kubenswrapper[4761]: I0307 08:17:29.069088 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.1.19:3000/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 08:17:33 crc kubenswrapper[4761]: I0307 08:17:33.093045 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 07 08:17:35 crc kubenswrapper[4761]: I0307 08:17:35.188269 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:35 crc kubenswrapper[4761]: I0307 08:17:35.266294 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-engine-676c57c97f-mmh72"] Mar 07 08:17:35 crc kubenswrapper[4761]: I0307 08:17:35.266970 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-676c57c97f-mmh72" podUID="1a968322-70c2-43b9-9842-7827fab7aa99" containerName="heat-engine" containerID="cri-o://e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" gracePeriod=60 Mar 07 08:17:36 crc kubenswrapper[4761]: E0307 08:17:36.431203 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:17:36 crc kubenswrapper[4761]: E0307 08:17:36.437225 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:17:36 crc kubenswrapper[4761]: E0307 08:17:36.443333 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:17:36 crc kubenswrapper[4761]: E0307 08:17:36.443398 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-676c57c97f-mmh72" podUID="1a968322-70c2-43b9-9842-7827fab7aa99" containerName="heat-engine" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 
08:17:37.158891 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.213597 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm"] Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.215141 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.217266 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.219113 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.219360 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.219591 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.258190 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-b8f8c888f-mxmzb"] Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.258449 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-b8f8c888f-mxmzb" podUID="35163093-c6c8-4422-b9cc-e12645187165" containerName="heat-api" containerID="cri-o://130936491ac0d66e8bc5863e526f0ce24165cc3492d527d7ec2236bfdce93f7a" gracePeriod=60 Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.289408 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm"] Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 
08:17:37.325114 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.325195 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd8st\" (UniqueName: \"kubernetes.io/projected/8c31bde2-d536-45b0-88c5-966abe8f4e1c-kube-api-access-gd8st\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.325256 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.325373 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.427123 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.427243 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.427293 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd8st\" (UniqueName: \"kubernetes.io/projected/8c31bde2-d536-45b0-88c5-966abe8f4e1c-kube-api-access-gd8st\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.427346 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.435609 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.436521 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.437118 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.474752 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.478500 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd8st\" (UniqueName: \"kubernetes.io/projected/8c31bde2-d536-45b0-88c5-966abe8f4e1c-kube-api-access-gd8st\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.555984 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.626479 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-759cd75854-8ppd6"] Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.627204 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-759cd75854-8ppd6" podUID="4b63b266-eb88-4bce-bb76-76dff72e1e72" containerName="heat-cfnapi" containerID="cri-o://52f0bb2496856fca4a0d012c5f9685733b249db4e1c09e4b737bfc2bc6bf9459" gracePeriod=60 Mar 07 08:17:38 crc kubenswrapper[4761]: I0307 08:17:38.743980 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm"] Mar 07 08:17:38 crc kubenswrapper[4761]: I0307 08:17:38.987787 4761 generic.go:334] "Generic (PLEG): container finished" podID="ee9f03ce-b3a6-440c-8b34-16c66dac3e00" containerID="5f85b7d01b435ebc644a16d877cffc84b9915f3cfecbcc3859a2b12cf4d1027e" exitCode=0 Mar 07 08:17:38 crc kubenswrapper[4761]: I0307 08:17:38.987860 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ee9f03ce-b3a6-440c-8b34-16c66dac3e00","Type":"ContainerDied","Data":"5f85b7d01b435ebc644a16d877cffc84b9915f3cfecbcc3859a2b12cf4d1027e"} Mar 07 08:17:38 crc kubenswrapper[4761]: I0307 08:17:38.989407 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" event={"ID":"8c31bde2-d536-45b0-88c5-966abe8f4e1c","Type":"ContainerStarted","Data":"970543db0ea3222b420283d576ab70128b0cf33c6742b873e007a25ee91ac59c"} Mar 07 08:17:38 crc kubenswrapper[4761]: I0307 08:17:38.991432 4761 generic.go:334] "Generic (PLEG): container finished" podID="894f6ffc-2563-49a6-913d-6b0b83a70fa3" containerID="3119aad3bad18654f546224b70b567df27d1eccea87fd252ee7a64a946448493" exitCode=0 Mar 07 08:17:38 crc 
kubenswrapper[4761]: I0307 08:17:38.991479 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"894f6ffc-2563-49a6-913d-6b0b83a70fa3","Type":"ContainerDied","Data":"3119aad3bad18654f546224b70b567df27d1eccea87fd252ee7a64a946448493"} Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.004969 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"894f6ffc-2563-49a6-913d-6b0b83a70fa3","Type":"ContainerStarted","Data":"cba577be14d63d0dced9b709f53e795e2649d180b619a4f22025f333b235659a"} Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.005437 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.012817 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ee9f03ce-b3a6-440c-8b34-16c66dac3e00","Type":"ContainerStarted","Data":"22da67e7ec21eff0037dbe7a894e1bdb6b39b1e28be0f927ba170cb142a21d41"} Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.013824 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.051939 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.051920458 podStartE2EDuration="38.051920458s" podCreationTimestamp="2026-03-07 08:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:17:40.049827456 +0000 UTC m=+1716.958993931" watchObservedRunningTime="2026-03-07 08:17:40.051920458 +0000 UTC m=+1716.961086933" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.060821 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" 
podStartSLOduration=38.060803849 podStartE2EDuration="38.060803849s" podCreationTimestamp="2026-03-07 08:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:17:40.032098313 +0000 UTC m=+1716.941264788" watchObservedRunningTime="2026-03-07 08:17:40.060803849 +0000 UTC m=+1716.969970324" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.426302 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-s8wjr"] Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.439474 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-s8wjr"] Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.489906 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-b8f8c888f-mxmzb" podUID="35163093-c6c8-4422-b9cc-e12645187165" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.235:8004/healthcheck\": read tcp 10.217.0.2:48402->10.217.0.235:8004: read: connection reset by peer" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.564373 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-zwc7j"] Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.566436 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.575142 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.667523 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsxk6\" (UniqueName: \"kubernetes.io/projected/95dc33be-c55b-4068-be61-85ad0e5724d6-kube-api-access-hsxk6\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.667624 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-scripts\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.667656 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-combined-ca-bundle\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.667730 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-config-data\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.668695 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-zwc7j"] Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.770390 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsxk6\" (UniqueName: \"kubernetes.io/projected/95dc33be-c55b-4068-be61-85ad0e5724d6-kube-api-access-hsxk6\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.770487 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-scripts\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.770516 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-combined-ca-bundle\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.770567 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-config-data\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.778815 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-scripts\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.779606 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-combined-ca-bundle\") 
pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.782040 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-config-data\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.789968 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsxk6\" (UniqueName: \"kubernetes.io/projected/95dc33be-c55b-4068-be61-85ad0e5724d6-kube-api-access-hsxk6\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.989322 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:41 crc kubenswrapper[4761]: I0307 08:17:41.066670 4761 generic.go:334] "Generic (PLEG): container finished" podID="35163093-c6c8-4422-b9cc-e12645187165" containerID="130936491ac0d66e8bc5863e526f0ce24165cc3492d527d7ec2236bfdce93f7a" exitCode=0 Mar 07 08:17:41 crc kubenswrapper[4761]: I0307 08:17:41.067276 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b8f8c888f-mxmzb" event={"ID":"35163093-c6c8-4422-b9cc-e12645187165","Type":"ContainerDied","Data":"130936491ac0d66e8bc5863e526f0ce24165cc3492d527d7ec2236bfdce93f7a"} Mar 07 08:17:41 crc kubenswrapper[4761]: I0307 08:17:41.428825 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-759cd75854-8ppd6" podUID="4b63b266-eb88-4bce-bb76-76dff72e1e72" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.236:8000/healthcheck\": read tcp 10.217.0.2:33908->10.217.0.236:8000: read: connection reset by peer" Mar 07 08:17:41 crc 
kubenswrapper[4761]: I0307 08:17:41.911693 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60fdff4b-2ca4-472c-8c44-40101c4a8fe1" path="/var/lib/kubelet/pods/60fdff4b-2ca4-472c-8c44-40101c4a8fe1/volumes" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.089159 4761 generic.go:334] "Generic (PLEG): container finished" podID="4b63b266-eb88-4bce-bb76-76dff72e1e72" containerID="52f0bb2496856fca4a0d012c5f9685733b249db4e1c09e4b737bfc2bc6bf9459" exitCode=0 Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.089233 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-759cd75854-8ppd6" event={"ID":"4b63b266-eb88-4bce-bb76-76dff72e1e72","Type":"ContainerDied","Data":"52f0bb2496856fca4a0d012c5f9685733b249db4e1c09e4b737bfc2bc6bf9459"} Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.091947 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b8f8c888f-mxmzb" event={"ID":"35163093-c6c8-4422-b9cc-e12645187165","Type":"ContainerDied","Data":"5d5aca546b08059075eb76b1f3ba8fe7d4bacc17011c3287975fcb34af813e4a"} Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.091987 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d5aca546b08059075eb76b1f3ba8fe7d4bacc17011c3287975fcb34af813e4a" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.179093 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.342883 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-public-tls-certs\") pod \"35163093-c6c8-4422-b9cc-e12645187165\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.343041 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data\") pod \"35163093-c6c8-4422-b9cc-e12645187165\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.343100 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wvxq\" (UniqueName: \"kubernetes.io/projected/35163093-c6c8-4422-b9cc-e12645187165-kube-api-access-2wvxq\") pod \"35163093-c6c8-4422-b9cc-e12645187165\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.343179 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data-custom\") pod \"35163093-c6c8-4422-b9cc-e12645187165\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.343220 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-combined-ca-bundle\") pod \"35163093-c6c8-4422-b9cc-e12645187165\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.343325 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-internal-tls-certs\") pod \"35163093-c6c8-4422-b9cc-e12645187165\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.355980 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "35163093-c6c8-4422-b9cc-e12645187165" (UID: "35163093-c6c8-4422-b9cc-e12645187165"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.370148 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35163093-c6c8-4422-b9cc-e12645187165-kube-api-access-2wvxq" (OuterVolumeSpecName: "kube-api-access-2wvxq") pod "35163093-c6c8-4422-b9cc-e12645187165" (UID: "35163093-c6c8-4422-b9cc-e12645187165"). InnerVolumeSpecName "kube-api-access-2wvxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.457167 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35163093-c6c8-4422-b9cc-e12645187165" (UID: "35163093-c6c8-4422-b9cc-e12645187165"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.484797 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.485765 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.485961 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wvxq\" (UniqueName: \"kubernetes.io/projected/35163093-c6c8-4422-b9cc-e12645187165-kube-api-access-2wvxq\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.492395 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-zwc7j"] Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.501918 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "35163093-c6c8-4422-b9cc-e12645187165" (UID: "35163093-c6c8-4422-b9cc-e12645187165"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.534949 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "35163093-c6c8-4422-b9cc-e12645187165" (UID: "35163093-c6c8-4422-b9cc-e12645187165"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.575988 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data" (OuterVolumeSpecName: "config-data") pod "35163093-c6c8-4422-b9cc-e12645187165" (UID: "35163093-c6c8-4422-b9cc-e12645187165"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.578847 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.588561 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-internal-tls-certs\") pod \"4b63b266-eb88-4bce-bb76-76dff72e1e72\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.588732 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data\") pod \"4b63b266-eb88-4bce-bb76-76dff72e1e72\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.588873 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-public-tls-certs\") pod \"4b63b266-eb88-4bce-bb76-76dff72e1e72\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.588910 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds5bs\" (UniqueName: 
\"kubernetes.io/projected/4b63b266-eb88-4bce-bb76-76dff72e1e72-kube-api-access-ds5bs\") pod \"4b63b266-eb88-4bce-bb76-76dff72e1e72\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.588934 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-combined-ca-bundle\") pod \"4b63b266-eb88-4bce-bb76-76dff72e1e72\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.589061 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data-custom\") pod \"4b63b266-eb88-4bce-bb76-76dff72e1e72\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.589913 4761 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.589932 4761 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.589946 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.598128 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod 
"4b63b266-eb88-4bce-bb76-76dff72e1e72" (UID: "4b63b266-eb88-4bce-bb76-76dff72e1e72"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.598703 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b63b266-eb88-4bce-bb76-76dff72e1e72-kube-api-access-ds5bs" (OuterVolumeSpecName: "kube-api-access-ds5bs") pod "4b63b266-eb88-4bce-bb76-76dff72e1e72" (UID: "4b63b266-eb88-4bce-bb76-76dff72e1e72"). InnerVolumeSpecName "kube-api-access-ds5bs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.680709 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b63b266-eb88-4bce-bb76-76dff72e1e72" (UID: "4b63b266-eb88-4bce-bb76-76dff72e1e72"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.687340 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data" (OuterVolumeSpecName: "config-data") pod "4b63b266-eb88-4bce-bb76-76dff72e1e72" (UID: "4b63b266-eb88-4bce-bb76-76dff72e1e72"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.691798 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds5bs\" (UniqueName: \"kubernetes.io/projected/4b63b266-eb88-4bce-bb76-76dff72e1e72-kube-api-access-ds5bs\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.691886 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.691929 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.691941 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.724943 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4b63b266-eb88-4bce-bb76-76dff72e1e72" (UID: "4b63b266-eb88-4bce-bb76-76dff72e1e72"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.731215 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4b63b266-eb88-4bce-bb76-76dff72e1e72" (UID: "4b63b266-eb88-4bce-bb76-76dff72e1e72"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.795084 4761 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.795124 4761 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.136308 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-759cd75854-8ppd6" event={"ID":"4b63b266-eb88-4bce-bb76-76dff72e1e72","Type":"ContainerDied","Data":"39064596057c52df8c571d8d99e9d09153c64bc5512fbc024127e78e3122a00c"} Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.136372 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.136400 4761 scope.go:117] "RemoveContainer" containerID="52f0bb2496856fca4a0d012c5f9685733b249db4e1c09e4b737bfc2bc6bf9459" Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.138195 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.149374 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zwc7j" event={"ID":"95dc33be-c55b-4068-be61-85ad0e5724d6","Type":"ContainerStarted","Data":"4754b18f2f40efa077d227c673086d1a130761084ba3e02bf6ea298250d54648"} Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.195005 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-b8f8c888f-mxmzb"] Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.212169 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-b8f8c888f-mxmzb"] Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.231764 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-759cd75854-8ppd6"] Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.251389 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-759cd75854-8ppd6"] Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.722712 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35163093-c6c8-4422-b9cc-e12645187165" path="/var/lib/kubelet/pods/35163093-c6c8-4422-b9cc-e12645187165/volumes" Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.724167 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b63b266-eb88-4bce-bb76-76dff72e1e72" path="/var/lib/kubelet/pods/4b63b266-eb88-4bce-bb76-76dff72e1e72/volumes" Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.768072 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.768116 4761 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:17:46 crc kubenswrapper[4761]: E0307 08:17:46.431419 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:17:46 crc kubenswrapper[4761]: E0307 08:17:46.435480 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:17:46 crc kubenswrapper[4761]: E0307 08:17:46.436989 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:17:46 crc kubenswrapper[4761]: E0307 08:17:46.437037 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-676c57c97f-mmh72" podUID="1a968322-70c2-43b9-9842-7827fab7aa99" containerName="heat-engine" Mar 07 08:17:53 crc kubenswrapper[4761]: I0307 08:17:53.308787 4761 generic.go:334] "Generic (PLEG): container finished" podID="1a968322-70c2-43b9-9842-7827fab7aa99" 
containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" exitCode=0 Mar 07 08:17:53 crc kubenswrapper[4761]: I0307 08:17:53.308902 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-676c57c97f-mmh72" event={"ID":"1a968322-70c2-43b9-9842-7827fab7aa99","Type":"ContainerDied","Data":"e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b"} Mar 07 08:17:53 crc kubenswrapper[4761]: I0307 08:17:53.400924 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="894f6ffc-2563-49a6-913d-6b0b83a70fa3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.24:5671: connect: connection refused" Mar 07 08:17:53 crc kubenswrapper[4761]: I0307 08:17:53.491035 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:56 crc kubenswrapper[4761]: E0307 08:17:56.429754 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b is running failed: container process not found" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:17:56 crc kubenswrapper[4761]: E0307 08:17:56.432494 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b is running failed: container process not found" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:17:56 crc kubenswrapper[4761]: E0307 08:17:56.432811 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or 
running: checking if PID of e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b is running failed: container process not found" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:17:56 crc kubenswrapper[4761]: E0307 08:17:56.432859 4761 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-676c57c97f-mmh72" podUID="1a968322-70c2-43b9-9842-7827fab7aa99" containerName="heat-engine" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.139844 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547858-dj8v9"] Mar 07 08:18:00 crc kubenswrapper[4761]: E0307 08:18:00.140849 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b63b266-eb88-4bce-bb76-76dff72e1e72" containerName="heat-cfnapi" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.140864 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b63b266-eb88-4bce-bb76-76dff72e1e72" containerName="heat-cfnapi" Mar 07 08:18:00 crc kubenswrapper[4761]: E0307 08:18:00.140884 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35163093-c6c8-4422-b9cc-e12645187165" containerName="heat-api" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.140890 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="35163093-c6c8-4422-b9cc-e12645187165" containerName="heat-api" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.141101 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="35163093-c6c8-4422-b9cc-e12645187165" containerName="heat-api" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.141113 4761 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4b63b266-eb88-4bce-bb76-76dff72e1e72" containerName="heat-cfnapi" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.142031 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547858-dj8v9" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.145255 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.145344 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.145484 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.147255 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr2vr\" (UniqueName: \"kubernetes.io/projected/b8ac045a-b834-4663-9efa-3b594a7f206f-kube-api-access-lr2vr\") pod \"auto-csr-approver-29547858-dj8v9\" (UID: \"b8ac045a-b834-4663-9efa-3b594a7f206f\") " pod="openshift-infra/auto-csr-approver-29547858-dj8v9" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.158954 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547858-dj8v9"] Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.250502 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr2vr\" (UniqueName: \"kubernetes.io/projected/b8ac045a-b834-4663-9efa-3b594a7f206f-kube-api-access-lr2vr\") pod \"auto-csr-approver-29547858-dj8v9\" (UID: \"b8ac045a-b834-4663-9efa-3b594a7f206f\") " pod="openshift-infra/auto-csr-approver-29547858-dj8v9" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.273329 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr2vr\" (UniqueName: 
\"kubernetes.io/projected/b8ac045a-b834-4663-9efa-3b594a7f206f-kube-api-access-lr2vr\") pod \"auto-csr-approver-29547858-dj8v9\" (UID: \"b8ac045a-b834-4663-9efa-3b594a7f206f\") " pod="openshift-infra/auto-csr-approver-29547858-dj8v9" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.474363 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547858-dj8v9" Mar 07 08:18:00 crc kubenswrapper[4761]: E0307 08:18:00.646374 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Mar 07 08:18:00 crc kubenswrapper[4761]: E0307 08:18:00.646859 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 08:18:00 crc kubenswrapper[4761]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Mar 07 08:18:00 crc kubenswrapper[4761]: - hosts: all Mar 07 08:18:00 crc kubenswrapper[4761]: strategy: linear Mar 07 08:18:00 crc kubenswrapper[4761]: tasks: Mar 07 08:18:00 crc kubenswrapper[4761]: - name: Enable podified-repos Mar 07 08:18:00 crc kubenswrapper[4761]: become: true Mar 07 08:18:00 crc kubenswrapper[4761]: ansible.builtin.shell: | Mar 07 08:18:00 crc kubenswrapper[4761]: set -euxo pipefail Mar 07 08:18:00 crc kubenswrapper[4761]: pushd /var/tmp Mar 07 08:18:00 crc kubenswrapper[4761]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Mar 07 08:18:00 crc kubenswrapper[4761]: pushd repo-setup-main Mar 07 08:18:00 crc kubenswrapper[4761]: python3 -m venv ./venv Mar 07 
08:18:00 crc kubenswrapper[4761]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Mar 07 08:18:00 crc kubenswrapper[4761]: ./venv/bin/repo-setup current-podified -b antelope Mar 07 08:18:00 crc kubenswrapper[4761]: popd Mar 07 08:18:00 crc kubenswrapper[4761]: rm -rf repo-setup-main Mar 07 08:18:00 crc kubenswrapper[4761]: Mar 07 08:18:00 crc kubenswrapper[4761]: Mar 07 08:18:00 crc kubenswrapper[4761]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Mar 07 08:18:00 crc kubenswrapper[4761]: edpm_override_hosts: openstack-edpm-ipam Mar 07 08:18:00 crc kubenswrapper[4761]: edpm_service_type: repo-setup Mar 07 08:18:00 crc kubenswrapper[4761]: Mar 07 08:18:00 crc kubenswrapper[4761]: Mar 07 08:18:00 crc kubenswrapper[4761]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gd8st,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil
,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm_openstack(8c31bde2-d536-45b0-88c5-966abe8f4e1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Mar 07 08:18:00 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 08:18:00 crc kubenswrapper[4761]: E0307 08:18:00.648175 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" podUID="8c31bde2-d536-45b0-88c5-966abe8f4e1c" Mar 07 08:18:01 crc kubenswrapper[4761]: E0307 08:18:01.425356 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" podUID="8c31bde2-d536-45b0-88c5-966abe8f4e1c" Mar 07 08:18:01 crc kubenswrapper[4761]: E0307 08:18:01.524337 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested" Mar 07 08:18:01 crc kubenswrapper[4761]: E0307 08:18:01.524389 4761 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested" Mar 07 08:18:01 crc kubenswrapper[4761]: E0307 08:18:01.524586 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:aodh-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:AodhPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:AodhPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:aodh-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsxk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxO
ptions:nil,RunAsUser:*42402,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod aodh-db-sync-zwc7j_openstack(95dc33be-c55b-4068-be61-85ad0e5724d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:18:01 crc kubenswrapper[4761]: E0307 08:18:01.525754 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"aodh-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/aodh-db-sync-zwc7j" podUID="95dc33be-c55b-4068-be61-85ad0e5724d6" Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.647507 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.786057 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data\") pod \"1a968322-70c2-43b9-9842-7827fab7aa99\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.786585 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-combined-ca-bundle\") pod \"1a968322-70c2-43b9-9842-7827fab7aa99\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.786631 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-994vx\" (UniqueName: \"kubernetes.io/projected/1a968322-70c2-43b9-9842-7827fab7aa99-kube-api-access-994vx\") pod \"1a968322-70c2-43b9-9842-7827fab7aa99\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.786687 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data-custom\") pod \"1a968322-70c2-43b9-9842-7827fab7aa99\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.812172 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1a968322-70c2-43b9-9842-7827fab7aa99" (UID: "1a968322-70c2-43b9-9842-7827fab7aa99"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.812689 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a968322-70c2-43b9-9842-7827fab7aa99-kube-api-access-994vx" (OuterVolumeSpecName: "kube-api-access-994vx") pod "1a968322-70c2-43b9-9842-7827fab7aa99" (UID: "1a968322-70c2-43b9-9842-7827fab7aa99"). InnerVolumeSpecName "kube-api-access-994vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.835998 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a968322-70c2-43b9-9842-7827fab7aa99" (UID: "1a968322-70c2-43b9-9842-7827fab7aa99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.852597 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data" (OuterVolumeSpecName: "config-data") pod "1a968322-70c2-43b9-9842-7827fab7aa99" (UID: "1a968322-70c2-43b9-9842-7827fab7aa99"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.890457 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.890496 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-994vx\" (UniqueName: \"kubernetes.io/projected/1a968322-70c2-43b9-9842-7827fab7aa99-kube-api-access-994vx\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.890514 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.890528 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:02 crc kubenswrapper[4761]: I0307 08:18:02.033306 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547858-dj8v9"] Mar 07 08:18:02 crc kubenswrapper[4761]: I0307 08:18:02.435683 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-676c57c97f-mmh72" event={"ID":"1a968322-70c2-43b9-9842-7827fab7aa99","Type":"ContainerDied","Data":"00ef68fbd07a8b813907cee43a3091207e90aeb988a48b6c328838e9d4ad0ea5"} Mar 07 08:18:02 crc kubenswrapper[4761]: I0307 08:18:02.435760 4761 scope.go:117] "RemoveContainer" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" Mar 07 08:18:02 crc kubenswrapper[4761]: I0307 08:18:02.435871 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:18:02 crc kubenswrapper[4761]: I0307 08:18:02.440998 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547858-dj8v9" event={"ID":"b8ac045a-b834-4663-9efa-3b594a7f206f","Type":"ContainerStarted","Data":"518f38dcbbc3b6b6a121fafb31a9357f4e3a4eebaa06d38e36f7216f0dfea5cc"} Mar 07 08:18:02 crc kubenswrapper[4761]: E0307 08:18:02.443491 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"aodh-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested\\\"\"" pod="openstack/aodh-db-sync-zwc7j" podUID="95dc33be-c55b-4068-be61-85ad0e5724d6" Mar 07 08:18:02 crc kubenswrapper[4761]: I0307 08:18:02.498996 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-676c57c97f-mmh72"] Mar 07 08:18:02 crc kubenswrapper[4761]: I0307 08:18:02.510470 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-676c57c97f-mmh72"] Mar 07 08:18:03 crc kubenswrapper[4761]: I0307 08:18:03.400909 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Mar 07 08:18:03 crc kubenswrapper[4761]: I0307 08:18:03.469882 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 07 08:18:03 crc kubenswrapper[4761]: I0307 08:18:03.733297 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a968322-70c2-43b9-9842-7827fab7aa99" path="/var/lib/kubelet/pods/1a968322-70c2-43b9-9842-7827fab7aa99/volumes" Mar 07 08:18:04 crc kubenswrapper[4761]: I0307 08:18:04.512433 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547858-dj8v9" 
event={"ID":"b8ac045a-b834-4663-9efa-3b594a7f206f","Type":"ContainerStarted","Data":"4876e046a7b6a600c7be9a7f1e443d545d71c97d3c11f39a16f16d32c1322116"} Mar 07 08:18:04 crc kubenswrapper[4761]: I0307 08:18:04.548634 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547858-dj8v9" podStartSLOduration=3.44862044 podStartE2EDuration="4.548610578s" podCreationTimestamp="2026-03-07 08:18:00 +0000 UTC" firstStartedPulling="2026-03-07 08:18:02.036356715 +0000 UTC m=+1738.945523190" lastFinishedPulling="2026-03-07 08:18:03.136346843 +0000 UTC m=+1740.045513328" observedRunningTime="2026-03-07 08:18:04.536921627 +0000 UTC m=+1741.446088102" watchObservedRunningTime="2026-03-07 08:18:04.548610578 +0000 UTC m=+1741.457777053" Mar 07 08:18:05 crc kubenswrapper[4761]: I0307 08:18:05.526925 4761 generic.go:334] "Generic (PLEG): container finished" podID="b8ac045a-b834-4663-9efa-3b594a7f206f" containerID="4876e046a7b6a600c7be9a7f1e443d545d71c97d3c11f39a16f16d32c1322116" exitCode=0 Mar 07 08:18:05 crc kubenswrapper[4761]: I0307 08:18:05.527011 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547858-dj8v9" event={"ID":"b8ac045a-b834-4663-9efa-3b594a7f206f","Type":"ContainerDied","Data":"4876e046a7b6a600c7be9a7f1e443d545d71c97d3c11f39a16f16d32c1322116"} Mar 07 08:18:06 crc kubenswrapper[4761]: I0307 08:18:06.980934 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547858-dj8v9" Mar 07 08:18:07 crc kubenswrapper[4761]: I0307 08:18:07.140791 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr2vr\" (UniqueName: \"kubernetes.io/projected/b8ac045a-b834-4663-9efa-3b594a7f206f-kube-api-access-lr2vr\") pod \"b8ac045a-b834-4663-9efa-3b594a7f206f\" (UID: \"b8ac045a-b834-4663-9efa-3b594a7f206f\") " Mar 07 08:18:07 crc kubenswrapper[4761]: I0307 08:18:07.147991 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ac045a-b834-4663-9efa-3b594a7f206f-kube-api-access-lr2vr" (OuterVolumeSpecName: "kube-api-access-lr2vr") pod "b8ac045a-b834-4663-9efa-3b594a7f206f" (UID: "b8ac045a-b834-4663-9efa-3b594a7f206f"). InnerVolumeSpecName "kube-api-access-lr2vr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:07 crc kubenswrapper[4761]: I0307 08:18:07.244585 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr2vr\" (UniqueName: \"kubernetes.io/projected/b8ac045a-b834-4663-9efa-3b594a7f206f-kube-api-access-lr2vr\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:07 crc kubenswrapper[4761]: I0307 08:18:07.550754 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547858-dj8v9" Mar 07 08:18:07 crc kubenswrapper[4761]: I0307 08:18:07.550695 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547858-dj8v9" event={"ID":"b8ac045a-b834-4663-9efa-3b594a7f206f","Type":"ContainerDied","Data":"518f38dcbbc3b6b6a121fafb31a9357f4e3a4eebaa06d38e36f7216f0dfea5cc"} Mar 07 08:18:07 crc kubenswrapper[4761]: I0307 08:18:07.551097 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="518f38dcbbc3b6b6a121fafb31a9357f4e3a4eebaa06d38e36f7216f0dfea5cc" Mar 07 08:18:07 crc kubenswrapper[4761]: I0307 08:18:07.728187 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="663244dc-847b-4dda-9c2c-4cae23e48e64" containerName="rabbitmq" containerID="cri-o://1d26b75a698ad04d687e4077e53dc96a9d1ef67c0216076f5debf22ce97e1f0d" gracePeriod=604796 Mar 07 08:18:08 crc kubenswrapper[4761]: I0307 08:18:08.062870 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547852-bt6bz"] Mar 07 08:18:08 crc kubenswrapper[4761]: I0307 08:18:08.077526 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547852-bt6bz"] Mar 07 08:18:09 crc kubenswrapper[4761]: I0307 08:18:09.745336 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd21ae8c-0b60-48ed-b287-3f861535b5d6" path="/var/lib/kubelet/pods/dd21ae8c-0b60-48ed-b287-3f861535b5d6/volumes" Mar 07 08:18:13 crc kubenswrapper[4761]: I0307 08:18:13.768550 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:18:13 crc kubenswrapper[4761]: I0307 08:18:13.769105 4761 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:18:14 crc kubenswrapper[4761]: I0307 08:18:14.143571 4761 scope.go:117] "RemoveContainer" containerID="5468dd9272cfb94f64e60fd95f4a2837460a1196ebd1cf21d856f7fa46025406" Mar 07 08:18:14 crc kubenswrapper[4761]: I0307 08:18:14.255496 4761 scope.go:117] "RemoveContainer" containerID="d5c3fbc73137202537359f63da2e062c34122ec37fea57f7f56fe096047b762b" Mar 07 08:18:14 crc kubenswrapper[4761]: I0307 08:18:14.357094 4761 scope.go:117] "RemoveContainer" containerID="772989b70eec3b548dee037094b02d89023b3589a3f2ed8a8189fbe364d5c076" Mar 07 08:18:14 crc kubenswrapper[4761]: I0307 08:18:14.419408 4761 scope.go:117] "RemoveContainer" containerID="678c4a7b6bb0f4d19eafaa5654456b93d7c9f779bbb622caf1e8268648186ea9" Mar 07 08:18:15 crc kubenswrapper[4761]: I0307 08:18:15.232873 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="663244dc-847b-4dda-9c2c-4cae23e48e64" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Mar 07 08:18:15 crc kubenswrapper[4761]: I0307 08:18:15.540294 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:18:16 crc kubenswrapper[4761]: I0307 08:18:16.678816 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" event={"ID":"8c31bde2-d536-45b0-88c5-966abe8f4e1c","Type":"ContainerStarted","Data":"3f8696b5a6d95d32cf68c57457bd7f66828570b12e850d47a78a4af1935159f4"} Mar 07 08:18:16 crc kubenswrapper[4761]: I0307 08:18:16.701792 4761 generic.go:334] "Generic (PLEG): container finished" 
podID="663244dc-847b-4dda-9c2c-4cae23e48e64" containerID="1d26b75a698ad04d687e4077e53dc96a9d1ef67c0216076f5debf22ce97e1f0d" exitCode=0 Mar 07 08:18:16 crc kubenswrapper[4761]: I0307 08:18:16.701877 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"663244dc-847b-4dda-9c2c-4cae23e48e64","Type":"ContainerDied","Data":"1d26b75a698ad04d687e4077e53dc96a9d1ef67c0216076f5debf22ce97e1f0d"} Mar 07 08:18:16 crc kubenswrapper[4761]: I0307 08:18:16.710176 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" podStartSLOduration=2.908963532 podStartE2EDuration="39.71015522s" podCreationTimestamp="2026-03-07 08:17:37 +0000 UTC" firstStartedPulling="2026-03-07 08:17:38.735834581 +0000 UTC m=+1715.645001056" lastFinishedPulling="2026-03-07 08:18:15.537026259 +0000 UTC m=+1752.446192744" observedRunningTime="2026-03-07 08:18:16.701458679 +0000 UTC m=+1753.610625154" watchObservedRunningTime="2026-03-07 08:18:16.71015522 +0000 UTC m=+1753.619321695" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.397490 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.427380 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.497004 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-plugins-conf\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.497067 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n62hs\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-kube-api-access-n62hs\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.497191 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/663244dc-847b-4dda-9c2c-4cae23e48e64-erlang-cookie-secret\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.497237 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-config-data\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.497335 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-confd\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " 
Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.497385 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-erlang-cookie\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.501944 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.502005 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-tls\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.502041 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-server-conf\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.502065 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-plugins\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.502108 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/663244dc-847b-4dda-9c2c-4cae23e48e64-pod-info\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.507507 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.510026 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.510417 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.543092 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663244dc-847b-4dda-9c2c-4cae23e48e64-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.543336 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.543758 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/663244dc-847b-4dda-9c2c-4cae23e48e64-pod-info" (OuterVolumeSpecName: "pod-info") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.547609 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-kube-api-access-n62hs" (OuterVolumeSpecName: "kube-api-access-n62hs") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "kube-api-access-n62hs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.606777 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.607098 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.609214 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.609305 4761 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/663244dc-847b-4dda-9c2c-4cae23e48e64-pod-info\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.609406 4761 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.609469 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n62hs\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-kube-api-access-n62hs\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.609538 4761 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/663244dc-847b-4dda-9c2c-4cae23e48e64-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: 
I0307 08:18:17.615555 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-config-data" (OuterVolumeSpecName: "config-data") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.683422 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51" (OuterVolumeSpecName: "persistence") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.688333 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-server-conf" (OuterVolumeSpecName: "server-conf") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.721783 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.721868 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") on node \"crc\" " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.721889 4761 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-server-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.724745 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.737216 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"663244dc-847b-4dda-9c2c-4cae23e48e64","Type":"ContainerDied","Data":"1abab7db156cafa869043228964f8c2a04ac722a8f9439b7f2f97babcd69aa26"} Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.737267 4761 scope.go:117] "RemoveContainer" containerID="1d26b75a698ad04d687e4077e53dc96a9d1ef67c0216076f5debf22ce97e1f0d" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.759036 4761 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.760142 4761 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51") on node "crc" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.779149 4761 scope.go:117] "RemoveContainer" containerID="cac1f058abec00ed564c939ed9e3b5f26abb1b9f3f9688745486b048618d23c8" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.823972 4761 reconciler_common.go:293] "Volume detached for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.829061 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.926757 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.092057 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.117929 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.185637 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Mar 07 08:18:18 crc kubenswrapper[4761]: E0307 08:18:18.186238 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="663244dc-847b-4dda-9c2c-4cae23e48e64" containerName="rabbitmq" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.186256 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="663244dc-847b-4dda-9c2c-4cae23e48e64" containerName="rabbitmq" Mar 07 08:18:18 crc kubenswrapper[4761]: E0307 08:18:18.186288 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a968322-70c2-43b9-9842-7827fab7aa99" containerName="heat-engine" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.186296 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a968322-70c2-43b9-9842-7827fab7aa99" containerName="heat-engine" Mar 07 08:18:18 crc kubenswrapper[4761]: E0307 08:18:18.186319 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="663244dc-847b-4dda-9c2c-4cae23e48e64" containerName="setup-container" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.186330 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="663244dc-847b-4dda-9c2c-4cae23e48e64" containerName="setup-container" Mar 07 08:18:18 crc kubenswrapper[4761]: E0307 
08:18:18.186371 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ac045a-b834-4663-9efa-3b594a7f206f" containerName="oc" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.186379 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ac045a-b834-4663-9efa-3b594a7f206f" containerName="oc" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.186678 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a968322-70c2-43b9-9842-7827fab7aa99" containerName="heat-engine" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.186708 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="663244dc-847b-4dda-9c2c-4cae23e48e64" containerName="rabbitmq" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.186748 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ac045a-b834-4663-9efa-3b594a7f206f" containerName="oc" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.188356 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.232899 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.347503 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-config-data\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.347876 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.348009 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.348138 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-server-conf\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.348249 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.348346 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-pod-info\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.348429 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.348656 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.349102 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.349313 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.349461 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqkv9\" (UniqueName: \"kubernetes.io/projected/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-kube-api-access-kqkv9\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451417 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-config-data\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451458 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451480 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451497 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-server-conf\") pod \"rabbitmq-server-1\" (UID: 
\"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451535 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451570 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-pod-info\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451591 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451651 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451766 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc 
kubenswrapper[4761]: I0307 08:18:18.451845 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.452513 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.452765 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.452790 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqkv9\" (UniqueName: \"kubernetes.io/projected/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-kube-api-access-kqkv9\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.452860 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.453251 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-server-conf\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.453522 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-config-data\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.455827 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.456089 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.457196 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.457237 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b440d898d7256a75603c2b0b9c323ce660ab24929494b6992860ef443ff68edd/globalmount\"" pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.474381 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-pod-info\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.474782 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.479980 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqkv9\" (UniqueName: \"kubernetes.io/projected/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-kube-api-access-kqkv9\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.559485 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") pod \"rabbitmq-server-1\" (UID: 
\"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.748518 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zwc7j" event={"ID":"95dc33be-c55b-4068-be61-85ad0e5724d6","Type":"ContainerStarted","Data":"98d4746bd821209a9116a5de380487afd770d79a6957041428405f00bc1c38f2"} Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.780653 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-zwc7j" podStartSLOduration=3.882354218 podStartE2EDuration="38.780627307s" podCreationTimestamp="2026-03-07 08:17:40 +0000 UTC" firstStartedPulling="2026-03-07 08:17:42.519125208 +0000 UTC m=+1719.428291683" lastFinishedPulling="2026-03-07 08:18:17.417398297 +0000 UTC m=+1754.326564772" observedRunningTime="2026-03-07 08:18:18.766018792 +0000 UTC m=+1755.675185267" watchObservedRunningTime="2026-03-07 08:18:18.780627307 +0000 UTC m=+1755.689793782" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.819779 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 07 08:18:19 crc kubenswrapper[4761]: I0307 08:18:19.417750 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 07 08:18:19 crc kubenswrapper[4761]: I0307 08:18:19.722322 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="663244dc-847b-4dda-9c2c-4cae23e48e64" path="/var/lib/kubelet/pods/663244dc-847b-4dda-9c2c-4cae23e48e64/volumes" Mar 07 08:18:19 crc kubenswrapper[4761]: I0307 08:18:19.761442 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc","Type":"ContainerStarted","Data":"9e335d05f97353c92af0477163b189fd50e357642797edbea3a04adaca659465"} Mar 07 08:18:21 crc kubenswrapper[4761]: I0307 08:18:21.798168 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc","Type":"ContainerStarted","Data":"30f27239d9d86a6951cb86e64cf67a28355de987e9b880514da23ead2161865e"} Mar 07 08:18:21 crc kubenswrapper[4761]: I0307 08:18:21.801903 4761 generic.go:334] "Generic (PLEG): container finished" podID="95dc33be-c55b-4068-be61-85ad0e5724d6" containerID="98d4746bd821209a9116a5de380487afd770d79a6957041428405f00bc1c38f2" exitCode=0 Mar 07 08:18:21 crc kubenswrapper[4761]: I0307 08:18:21.801948 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zwc7j" event={"ID":"95dc33be-c55b-4068-be61-85ad0e5724d6","Type":"ContainerDied","Data":"98d4746bd821209a9116a5de380487afd770d79a6957041428405f00bc1c38f2"} Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.215789 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.274615 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-config-data\") pod \"95dc33be-c55b-4068-be61-85ad0e5724d6\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.274671 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-scripts\") pod \"95dc33be-c55b-4068-be61-85ad0e5724d6\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.274787 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-combined-ca-bundle\") pod \"95dc33be-c55b-4068-be61-85ad0e5724d6\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.274882 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsxk6\" (UniqueName: \"kubernetes.io/projected/95dc33be-c55b-4068-be61-85ad0e5724d6-kube-api-access-hsxk6\") pod \"95dc33be-c55b-4068-be61-85ad0e5724d6\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.282449 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-scripts" (OuterVolumeSpecName: "scripts") pod "95dc33be-c55b-4068-be61-85ad0e5724d6" (UID: "95dc33be-c55b-4068-be61-85ad0e5724d6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.283175 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95dc33be-c55b-4068-be61-85ad0e5724d6-kube-api-access-hsxk6" (OuterVolumeSpecName: "kube-api-access-hsxk6") pod "95dc33be-c55b-4068-be61-85ad0e5724d6" (UID: "95dc33be-c55b-4068-be61-85ad0e5724d6"). InnerVolumeSpecName "kube-api-access-hsxk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.310245 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-config-data" (OuterVolumeSpecName: "config-data") pod "95dc33be-c55b-4068-be61-85ad0e5724d6" (UID: "95dc33be-c55b-4068-be61-85ad0e5724d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.322663 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95dc33be-c55b-4068-be61-85ad0e5724d6" (UID: "95dc33be-c55b-4068-be61-85ad0e5724d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.377282 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.377311 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.377320 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.377330 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsxk6\" (UniqueName: \"kubernetes.io/projected/95dc33be-c55b-4068-be61-85ad0e5724d6-kube-api-access-hsxk6\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.855435 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zwc7j" event={"ID":"95dc33be-c55b-4068-be61-85ad0e5724d6","Type":"ContainerDied","Data":"4754b18f2f40efa077d227c673086d1a130761084ba3e02bf6ea298250d54648"} Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.855488 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.855493 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4754b18f2f40efa077d227c673086d1a130761084ba3e02bf6ea298250d54648" Mar 07 08:18:25 crc kubenswrapper[4761]: I0307 08:18:25.600613 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 07 08:18:25 crc kubenswrapper[4761]: I0307 08:18:25.601259 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-api" containerID="cri-o://9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45" gracePeriod=30 Mar 07 08:18:25 crc kubenswrapper[4761]: I0307 08:18:25.601806 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-notifier" containerID="cri-o://f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862" gracePeriod=30 Mar 07 08:18:25 crc kubenswrapper[4761]: I0307 08:18:25.601837 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-evaluator" containerID="cri-o://a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0" gracePeriod=30 Mar 07 08:18:25 crc kubenswrapper[4761]: I0307 08:18:25.601806 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-listener" containerID="cri-o://234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96" gracePeriod=30 Mar 07 08:18:26 crc kubenswrapper[4761]: I0307 08:18:26.891117 4761 generic.go:334] "Generic (PLEG): container finished" podID="887264dd-6715-4050-a798-9a88572bab63" 
containerID="a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0" exitCode=0 Mar 07 08:18:26 crc kubenswrapper[4761]: I0307 08:18:26.892325 4761 generic.go:334] "Generic (PLEG): container finished" podID="887264dd-6715-4050-a798-9a88572bab63" containerID="9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45" exitCode=0 Mar 07 08:18:26 crc kubenswrapper[4761]: I0307 08:18:26.891205 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerDied","Data":"a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0"} Mar 07 08:18:26 crc kubenswrapper[4761]: I0307 08:18:26.892419 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerDied","Data":"9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45"} Mar 07 08:18:27 crc kubenswrapper[4761]: I0307 08:18:27.906027 4761 generic.go:334] "Generic (PLEG): container finished" podID="8c31bde2-d536-45b0-88c5-966abe8f4e1c" containerID="3f8696b5a6d95d32cf68c57457bd7f66828570b12e850d47a78a4af1935159f4" exitCode=0 Mar 07 08:18:27 crc kubenswrapper[4761]: I0307 08:18:27.906188 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" event={"ID":"8c31bde2-d536-45b0-88c5-966abe8f4e1c","Type":"ContainerDied","Data":"3f8696b5a6d95d32cf68c57457bd7f66828570b12e850d47a78a4af1935159f4"} Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.462117 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.520907 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-inventory\") pod \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.521284 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-repo-setup-combined-ca-bundle\") pod \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.521355 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd8st\" (UniqueName: \"kubernetes.io/projected/8c31bde2-d536-45b0-88c5-966abe8f4e1c-kube-api-access-gd8st\") pod \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.521382 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-ssh-key-openstack-edpm-ipam\") pod \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.527602 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c31bde2-d536-45b0-88c5-966abe8f4e1c-kube-api-access-gd8st" (OuterVolumeSpecName: "kube-api-access-gd8st") pod "8c31bde2-d536-45b0-88c5-966abe8f4e1c" (UID: "8c31bde2-d536-45b0-88c5-966abe8f4e1c"). InnerVolumeSpecName "kube-api-access-gd8st". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.528151 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "8c31bde2-d536-45b0-88c5-966abe8f4e1c" (UID: "8c31bde2-d536-45b0-88c5-966abe8f4e1c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.563959 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8c31bde2-d536-45b0-88c5-966abe8f4e1c" (UID: "8c31bde2-d536-45b0-88c5-966abe8f4e1c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.570150 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-inventory" (OuterVolumeSpecName: "inventory") pod "8c31bde2-d536-45b0-88c5-966abe8f4e1c" (UID: "8c31bde2-d536-45b0-88c5-966abe8f4e1c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.623295 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.623353 4761 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.623372 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd8st\" (UniqueName: \"kubernetes.io/projected/8c31bde2-d536-45b0-88c5-966abe8f4e1c-kube-api-access-gd8st\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.623384 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.931915 4761 generic.go:334] "Generic (PLEG): container finished" podID="887264dd-6715-4050-a798-9a88572bab63" containerID="234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96" exitCode=0 Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.931995 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerDied","Data":"234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96"} Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.934002 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" 
event={"ID":"8c31bde2-d536-45b0-88c5-966abe8f4e1c","Type":"ContainerDied","Data":"970543db0ea3222b420283d576ab70128b0cf33c6742b873e007a25ee91ac59c"} Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.934027 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="970543db0ea3222b420283d576ab70128b0cf33c6742b873e007a25ee91ac59c" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.934401 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.082833 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk"] Mar 07 08:18:30 crc kubenswrapper[4761]: E0307 08:18:30.083340 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c31bde2-d536-45b0-88c5-966abe8f4e1c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.083359 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c31bde2-d536-45b0-88c5-966abe8f4e1c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 07 08:18:30 crc kubenswrapper[4761]: E0307 08:18:30.083378 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95dc33be-c55b-4068-be61-85ad0e5724d6" containerName="aodh-db-sync" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.083387 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="95dc33be-c55b-4068-be61-85ad0e5724d6" containerName="aodh-db-sync" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.083598 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c31bde2-d536-45b0-88c5-966abe8f4e1c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.083636 4761 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="95dc33be-c55b-4068-be61-85ad0e5724d6" containerName="aodh-db-sync" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.084389 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.090125 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.090466 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.090694 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.090839 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.103932 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk"] Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.134298 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7pjk\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.134450 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7pjk\" (UID: 
\"7fb04149-6828-4d2d-ae60-8425380b1219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.134574 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlcbw\" (UniqueName: \"kubernetes.io/projected/7fb04149-6828-4d2d-ae60-8425380b1219-kube-api-access-tlcbw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7pjk\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.235826 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7pjk\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.235921 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlcbw\" (UniqueName: \"kubernetes.io/projected/7fb04149-6828-4d2d-ae60-8425380b1219-kube-api-access-tlcbw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7pjk\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.236022 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7pjk\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.239704 4761 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7pjk\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.240096 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7pjk\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.250675 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlcbw\" (UniqueName: \"kubernetes.io/projected/7fb04149-6828-4d2d-ae60-8425380b1219-kube-api-access-tlcbw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7pjk\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.453184 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.835951 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.852829 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-public-tls-certs\") pod \"887264dd-6715-4050-a798-9a88572bab63\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.852929 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddsfc\" (UniqueName: \"kubernetes.io/projected/887264dd-6715-4050-a798-9a88572bab63-kube-api-access-ddsfc\") pod \"887264dd-6715-4050-a798-9a88572bab63\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.852959 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-config-data\") pod \"887264dd-6715-4050-a798-9a88572bab63\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.853018 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-internal-tls-certs\") pod \"887264dd-6715-4050-a798-9a88572bab63\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.853132 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-scripts\") pod \"887264dd-6715-4050-a798-9a88572bab63\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.853218 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-combined-ca-bundle\") pod \"887264dd-6715-4050-a798-9a88572bab63\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.879205 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/887264dd-6715-4050-a798-9a88572bab63-kube-api-access-ddsfc" (OuterVolumeSpecName: "kube-api-access-ddsfc") pod "887264dd-6715-4050-a798-9a88572bab63" (UID: "887264dd-6715-4050-a798-9a88572bab63"). InnerVolumeSpecName "kube-api-access-ddsfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.885537 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-scripts" (OuterVolumeSpecName: "scripts") pod "887264dd-6715-4050-a798-9a88572bab63" (UID: "887264dd-6715-4050-a798-9a88572bab63"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.939959 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "887264dd-6715-4050-a798-9a88572bab63" (UID: "887264dd-6715-4050-a798-9a88572bab63"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.949865 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "887264dd-6715-4050-a798-9a88572bab63" (UID: "887264dd-6715-4050-a798-9a88572bab63"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.953789 4761 generic.go:334] "Generic (PLEG): container finished" podID="887264dd-6715-4050-a798-9a88572bab63" containerID="f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862" exitCode=0 Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.953845 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerDied","Data":"f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862"} Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.953871 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerDied","Data":"f8d557621f70be05e00a45ad085f517dd06d59ecf5a8c7716d6f59d81155a216"} Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.953886 4761 scope.go:117] "RemoveContainer" containerID="234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.954022 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.957972 4761 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.957988 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddsfc\" (UniqueName: \"kubernetes.io/projected/887264dd-6715-4050-a798-9a88572bab63-kube-api-access-ddsfc\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.957999 4761 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.958009 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.996337 4761 scope.go:117] "RemoveContainer" containerID="f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.009055 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-config-data" (OuterVolumeSpecName: "config-data") pod "887264dd-6715-4050-a798-9a88572bab63" (UID: "887264dd-6715-4050-a798-9a88572bab63"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.019019 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "887264dd-6715-4050-a798-9a88572bab63" (UID: "887264dd-6715-4050-a798-9a88572bab63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.023993 4761 scope.go:117] "RemoveContainer" containerID="a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.053800 4761 scope.go:117] "RemoveContainer" containerID="9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.072613 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.072645 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.088315 4761 scope.go:117] "RemoveContainer" containerID="234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96" Mar 07 08:18:31 crc kubenswrapper[4761]: E0307 08:18:31.088816 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96\": container with ID starting with 234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96 not found: ID does not exist" 
containerID="234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.088841 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96"} err="failed to get container status \"234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96\": rpc error: code = NotFound desc = could not find container \"234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96\": container with ID starting with 234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96 not found: ID does not exist" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.088859 4761 scope.go:117] "RemoveContainer" containerID="f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862" Mar 07 08:18:31 crc kubenswrapper[4761]: E0307 08:18:31.089190 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862\": container with ID starting with f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862 not found: ID does not exist" containerID="f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.089231 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862"} err="failed to get container status \"f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862\": rpc error: code = NotFound desc = could not find container \"f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862\": container with ID starting with f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862 not found: ID does not exist" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.089263 4761 scope.go:117] 
"RemoveContainer" containerID="a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0" Mar 07 08:18:31 crc kubenswrapper[4761]: E0307 08:18:31.089693 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0\": container with ID starting with a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0 not found: ID does not exist" containerID="a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.089741 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0"} err="failed to get container status \"a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0\": rpc error: code = NotFound desc = could not find container \"a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0\": container with ID starting with a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0 not found: ID does not exist" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.089759 4761 scope.go:117] "RemoveContainer" containerID="9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45" Mar 07 08:18:31 crc kubenswrapper[4761]: E0307 08:18:31.090032 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45\": container with ID starting with 9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45 not found: ID does not exist" containerID="9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.090054 4761 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45"} err="failed to get container status \"9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45\": rpc error: code = NotFound desc = could not find container \"9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45\": container with ID starting with 9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45 not found: ID does not exist" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.309133 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.334824 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.354442 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 07 08:18:31 crc kubenswrapper[4761]: E0307 08:18:31.355073 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-notifier" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.355090 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-notifier" Mar 07 08:18:31 crc kubenswrapper[4761]: E0307 08:18:31.355102 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-evaluator" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.355109 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-evaluator" Mar 07 08:18:31 crc kubenswrapper[4761]: E0307 08:18:31.355127 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-api" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.355133 4761 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-api" Mar 07 08:18:31 crc kubenswrapper[4761]: E0307 08:18:31.355151 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-listener" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.355157 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-listener" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.355503 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-listener" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.355541 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-api" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.355553 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-evaluator" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.355564 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-notifier" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.357900 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.362989 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-wcdfq" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.366998 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.367069 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.367097 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.367160 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.380335 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-config-data\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.380393 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.380418 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-public-tls-certs\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc 
kubenswrapper[4761]: I0307 08:18:31.380435 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8mbd\" (UniqueName: \"kubernetes.io/projected/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-kube-api-access-d8mbd\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.380510 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-scripts\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.380570 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-internal-tls-certs\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.381023 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.482241 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-scripts\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.482323 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-internal-tls-certs\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.482415 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-config-data\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.482456 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.482476 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-public-tls-certs\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.482496 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8mbd\" (UniqueName: \"kubernetes.io/projected/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-kube-api-access-d8mbd\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.487335 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-config-data\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.487386 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc 
kubenswrapper[4761]: I0307 08:18:31.487464 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-internal-tls-certs\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.501422 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-public-tls-certs\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.502649 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-scripts\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.504654 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8mbd\" (UniqueName: \"kubernetes.io/projected/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-kube-api-access-d8mbd\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.580642 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk"] Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.688480 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.723850 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="887264dd-6715-4050-a798-9a88572bab63" path="/var/lib/kubelet/pods/887264dd-6715-4050-a798-9a88572bab63/volumes" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.970888 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" event={"ID":"7fb04149-6828-4d2d-ae60-8425380b1219","Type":"ContainerStarted","Data":"58cceb2cb86d3dda5cda0566d29655beefb3fa62450f22144f0feb19c99a827d"} Mar 07 08:18:32 crc kubenswrapper[4761]: W0307 08:18:32.207775 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7462784_7bd0_4cfe_96f0_e3c9bef7c4ff.slice/crio-022f87de2a7ccecc4a9769ac9fb0e4cdc44ae65578b586ef8b578eb30284c0e6 WatchSource:0}: Error finding container 022f87de2a7ccecc4a9769ac9fb0e4cdc44ae65578b586ef8b578eb30284c0e6: Status 404 returned error can't find the container with id 022f87de2a7ccecc4a9769ac9fb0e4cdc44ae65578b586ef8b578eb30284c0e6 Mar 07 08:18:32 crc kubenswrapper[4761]: I0307 08:18:32.209864 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 07 08:18:32 crc kubenswrapper[4761]: I0307 08:18:32.984800 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" event={"ID":"7fb04149-6828-4d2d-ae60-8425380b1219","Type":"ContainerStarted","Data":"36150e3a809a7828b0c06d94ae31e91b5fcb42a9a84f2ae702ad1155eda161f8"} Mar 07 08:18:32 crc kubenswrapper[4761]: I0307 08:18:32.988253 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff","Type":"ContainerStarted","Data":"815ba6d9720a5d308af10a27bf84aa0b10d6fec32e9125202d922865383ec8e0"} Mar 07 08:18:32 crc kubenswrapper[4761]: I0307 
08:18:32.988297 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff","Type":"ContainerStarted","Data":"022f87de2a7ccecc4a9769ac9fb0e4cdc44ae65578b586ef8b578eb30284c0e6"} Mar 07 08:18:33 crc kubenswrapper[4761]: I0307 08:18:33.016181 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" podStartSLOduration=2.541315073 podStartE2EDuration="3.016165616s" podCreationTimestamp="2026-03-07 08:18:30 +0000 UTC" firstStartedPulling="2026-03-07 08:18:31.58964483 +0000 UTC m=+1768.498811305" lastFinishedPulling="2026-03-07 08:18:32.064495373 +0000 UTC m=+1768.973661848" observedRunningTime="2026-03-07 08:18:33.00024774 +0000 UTC m=+1769.909414225" watchObservedRunningTime="2026-03-07 08:18:33.016165616 +0000 UTC m=+1769.925332091" Mar 07 08:18:34 crc kubenswrapper[4761]: I0307 08:18:34.001533 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff","Type":"ContainerStarted","Data":"68b9a2d1646a095a207a104fac118b3f9b56b7de248dc67366711f9f3a91a572"} Mar 07 08:18:35 crc kubenswrapper[4761]: I0307 08:18:35.012757 4761 generic.go:334] "Generic (PLEG): container finished" podID="7fb04149-6828-4d2d-ae60-8425380b1219" containerID="36150e3a809a7828b0c06d94ae31e91b5fcb42a9a84f2ae702ad1155eda161f8" exitCode=0 Mar 07 08:18:35 crc kubenswrapper[4761]: I0307 08:18:35.012860 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" event={"ID":"7fb04149-6828-4d2d-ae60-8425380b1219","Type":"ContainerDied","Data":"36150e3a809a7828b0c06d94ae31e91b5fcb42a9a84f2ae702ad1155eda161f8"} Mar 07 08:18:35 crc kubenswrapper[4761]: I0307 08:18:35.015673 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff","Type":"ContainerStarted","Data":"1e3d92bc0f3a33367d3427d5de026cb0316be83210bbc798dfb4e563965bd4f3"} Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.639934 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.755883 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-ssh-key-openstack-edpm-ipam\") pod \"7fb04149-6828-4d2d-ae60-8425380b1219\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.755970 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-inventory\") pod \"7fb04149-6828-4d2d-ae60-8425380b1219\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.756115 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlcbw\" (UniqueName: \"kubernetes.io/projected/7fb04149-6828-4d2d-ae60-8425380b1219-kube-api-access-tlcbw\") pod \"7fb04149-6828-4d2d-ae60-8425380b1219\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.763910 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb04149-6828-4d2d-ae60-8425380b1219-kube-api-access-tlcbw" (OuterVolumeSpecName: "kube-api-access-tlcbw") pod "7fb04149-6828-4d2d-ae60-8425380b1219" (UID: "7fb04149-6828-4d2d-ae60-8425380b1219"). InnerVolumeSpecName "kube-api-access-tlcbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.800874 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-inventory" (OuterVolumeSpecName: "inventory") pod "7fb04149-6828-4d2d-ae60-8425380b1219" (UID: "7fb04149-6828-4d2d-ae60-8425380b1219"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.828845 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7fb04149-6828-4d2d-ae60-8425380b1219" (UID: "7fb04149-6828-4d2d-ae60-8425380b1219"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.860498 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlcbw\" (UniqueName: \"kubernetes.io/projected/7fb04149-6828-4d2d-ae60-8425380b1219-kube-api-access-tlcbw\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.860538 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.860548 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.045417 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" 
event={"ID":"7fb04149-6828-4d2d-ae60-8425380b1219","Type":"ContainerDied","Data":"58cceb2cb86d3dda5cda0566d29655beefb3fa62450f22144f0feb19c99a827d"} Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.045850 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58cceb2cb86d3dda5cda0566d29655beefb3fa62450f22144f0feb19c99a827d" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.045437 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.047762 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff","Type":"ContainerStarted","Data":"d73e16baeab42d6fda390e8667edb0a9d7a1ea9ae9ffd1f3415026da472b7770"} Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.083125 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.498027297 podStartE2EDuration="6.083106004s" podCreationTimestamp="2026-03-07 08:18:31 +0000 UTC" firstStartedPulling="2026-03-07 08:18:32.210571264 +0000 UTC m=+1769.119737739" lastFinishedPulling="2026-03-07 08:18:35.795649961 +0000 UTC m=+1772.704816446" observedRunningTime="2026-03-07 08:18:37.067548457 +0000 UTC m=+1773.976714942" watchObservedRunningTime="2026-03-07 08:18:37.083106004 +0000 UTC m=+1773.992272479" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.160198 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m"] Mar 07 08:18:37 crc kubenswrapper[4761]: E0307 08:18:37.160844 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb04149-6828-4d2d-ae60-8425380b1219" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.160867 4761 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="7fb04149-6828-4d2d-ae60-8425380b1219" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.161137 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fb04149-6828-4d2d-ae60-8425380b1219" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.162062 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.180805 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.181108 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.181149 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.181249 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.206943 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m"] Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.271359 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.271448 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.271566 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kzxl\" (UniqueName: \"kubernetes.io/projected/27f66d5b-c359-480d-9bb8-02447507d3ca-kube-api-access-7kzxl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.271612 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.374271 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.374348 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.374447 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kzxl\" (UniqueName: \"kubernetes.io/projected/27f66d5b-c359-480d-9bb8-02447507d3ca-kube-api-access-7kzxl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.374675 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.380420 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.389389 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: 
\"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.389853 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.392184 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kzxl\" (UniqueName: \"kubernetes.io/projected/27f66d5b-c359-480d-9bb8-02447507d3ca-kube-api-access-7kzxl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.486133 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:38 crc kubenswrapper[4761]: W0307 08:18:38.131960 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27f66d5b_c359_480d_9bb8_02447507d3ca.slice/crio-4d1f47a95b5bd7743d5c43cddb0d2812b995552636246a3bcf4c7186386ea5fe WatchSource:0}: Error finding container 4d1f47a95b5bd7743d5c43cddb0d2812b995552636246a3bcf4c7186386ea5fe: Status 404 returned error can't find the container with id 4d1f47a95b5bd7743d5c43cddb0d2812b995552636246a3bcf4c7186386ea5fe Mar 07 08:18:38 crc kubenswrapper[4761]: I0307 08:18:38.138413 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m"] Mar 07 08:18:39 crc kubenswrapper[4761]: I0307 08:18:39.071960 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" event={"ID":"27f66d5b-c359-480d-9bb8-02447507d3ca","Type":"ContainerStarted","Data":"409bd61756789942b4acbc43b888dac4fc2e317ec8827dd92dcf87d2d790a713"} Mar 07 08:18:39 crc kubenswrapper[4761]: I0307 08:18:39.072298 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" event={"ID":"27f66d5b-c359-480d-9bb8-02447507d3ca","Type":"ContainerStarted","Data":"4d1f47a95b5bd7743d5c43cddb0d2812b995552636246a3bcf4c7186386ea5fe"} Mar 07 08:18:39 crc kubenswrapper[4761]: I0307 08:18:39.094423 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" podStartSLOduration=1.660351764 podStartE2EDuration="2.094405937s" podCreationTimestamp="2026-03-07 08:18:37 +0000 UTC" firstStartedPulling="2026-03-07 08:18:38.136074473 +0000 UTC m=+1775.045240948" lastFinishedPulling="2026-03-07 08:18:38.570128646 +0000 UTC m=+1775.479295121" 
observedRunningTime="2026-03-07 08:18:39.085971692 +0000 UTC m=+1775.995138167" watchObservedRunningTime="2026-03-07 08:18:39.094405937 +0000 UTC m=+1776.003572412" Mar 07 08:18:43 crc kubenswrapper[4761]: I0307 08:18:43.768332 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:18:43 crc kubenswrapper[4761]: I0307 08:18:43.769077 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:18:43 crc kubenswrapper[4761]: I0307 08:18:43.769163 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:18:43 crc kubenswrapper[4761]: I0307 08:18:43.770299 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:18:43 crc kubenswrapper[4761]: I0307 08:18:43.770389 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" gracePeriod=600 Mar 07 08:18:43 crc kubenswrapper[4761]: E0307 
08:18:43.897867 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:18:44 crc kubenswrapper[4761]: I0307 08:18:44.126539 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" exitCode=0 Mar 07 08:18:44 crc kubenswrapper[4761]: I0307 08:18:44.126627 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806"} Mar 07 08:18:44 crc kubenswrapper[4761]: I0307 08:18:44.126867 4761 scope.go:117] "RemoveContainer" containerID="884da56902d61ce2a23842311611c1facb0e638b212880b855a9c7825ef51b45" Mar 07 08:18:44 crc kubenswrapper[4761]: I0307 08:18:44.128783 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:18:44 crc kubenswrapper[4761]: E0307 08:18:44.129439 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:18:54 crc kubenswrapper[4761]: I0307 08:18:54.248478 4761 generic.go:334] "Generic (PLEG): container 
finished" podID="4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc" containerID="30f27239d9d86a6951cb86e64cf67a28355de987e9b880514da23ead2161865e" exitCode=0 Mar 07 08:18:54 crc kubenswrapper[4761]: I0307 08:18:54.248565 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc","Type":"ContainerDied","Data":"30f27239d9d86a6951cb86e64cf67a28355de987e9b880514da23ead2161865e"} Mar 07 08:18:55 crc kubenswrapper[4761]: I0307 08:18:55.260097 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc","Type":"ContainerStarted","Data":"d3dd5cdaee752ea71ae1b558522947c54cb2313bb9efb5cddcb2169fa6453777"} Mar 07 08:18:55 crc kubenswrapper[4761]: I0307 08:18:55.260761 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Mar 07 08:18:55 crc kubenswrapper[4761]: I0307 08:18:55.288541 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.288523827 podStartE2EDuration="37.288523827s" podCreationTimestamp="2026-03-07 08:18:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:18:55.282182043 +0000 UTC m=+1792.191348538" watchObservedRunningTime="2026-03-07 08:18:55.288523827 +0000 UTC m=+1792.197690302" Mar 07 08:18:58 crc kubenswrapper[4761]: I0307 08:18:58.705775 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:18:58 crc kubenswrapper[4761]: E0307 08:18:58.706653 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:19:08 crc kubenswrapper[4761]: I0307 08:19:08.823883 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Mar 07 08:19:08 crc kubenswrapper[4761]: I0307 08:19:08.931840 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:19:09 crc kubenswrapper[4761]: I0307 08:19:09.706442 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:19:09 crc kubenswrapper[4761]: E0307 08:19:09.707089 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:19:12 crc kubenswrapper[4761]: I0307 08:19:12.936049 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="49dec540-e872-432f-bffe-1b0380ac0082" containerName="rabbitmq" containerID="cri-o://5888ac18ecadfb4983a3dc774d889f0a46c93806c8b965f02ef1b4898fdb22d2" gracePeriod=604796 Mar 07 08:19:14 crc kubenswrapper[4761]: I0307 08:19:14.900533 4761 scope.go:117] "RemoveContainer" containerID="560fe328c871c1fd36e317523f8415d6e1437c8d786e81f4b10c902c8f0a9573" Mar 07 08:19:15 crc kubenswrapper[4761]: I0307 08:19:15.199658 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="49dec540-e872-432f-bffe-1b0380ac0082" containerName="rabbitmq" probeResult="failure" 
output="dial tcp 10.217.0.133:5671: connect: connection refused" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.585796 4761 generic.go:334] "Generic (PLEG): container finished" podID="49dec540-e872-432f-bffe-1b0380ac0082" containerID="5888ac18ecadfb4983a3dc774d889f0a46c93806c8b965f02ef1b4898fdb22d2" exitCode=0 Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.585888 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49dec540-e872-432f-bffe-1b0380ac0082","Type":"ContainerDied","Data":"5888ac18ecadfb4983a3dc774d889f0a46c93806c8b965f02ef1b4898fdb22d2"} Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.786581 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.894697 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76f82\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-kube-api-access-76f82\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.894802 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-plugins\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.894936 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-confd\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.894981 4761 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49dec540-e872-432f-bffe-1b0380ac0082-pod-info\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.895018 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49dec540-e872-432f-bffe-1b0380ac0082-erlang-cookie-secret\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.895052 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-config-data\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.895071 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-erlang-cookie\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.895123 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-server-conf\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.895193 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-plugins-conf\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: 
\"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.895514 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.895801 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.895827 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-tls\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.896075 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.896263 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.897139 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.897171 4761 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.897318 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.907863 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49dec540-e872-432f-bffe-1b0380ac0082-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.908138 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.908446 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/49dec540-e872-432f-bffe-1b0380ac0082-pod-info" (OuterVolumeSpecName: "pod-info") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.924411 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-kube-api-access-76f82" (OuterVolumeSpecName: "kube-api-access-76f82") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "kube-api-access-76f82". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.955183 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-config-data" (OuterVolumeSpecName: "config-data") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.957131 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d" (OuterVolumeSpecName: "persistence") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.974780 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-server-conf" (OuterVolumeSpecName: "server-conf") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.000348 4761 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49dec540-e872-432f-bffe-1b0380ac0082-pod-info\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.000407 4761 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49dec540-e872-432f-bffe-1b0380ac0082-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.000418 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.000427 4761 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-server-conf\") on node 
\"crc\" DevicePath \"\"" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.000435 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.000469 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") on node \"crc\" " Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.000483 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76f82\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-kube-api-access-76f82\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.038325 4761 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.038496 4761 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d") on node "crc" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.046056 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.102257 4761 reconciler_common.go:293] "Volume detached for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.102470 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.602900 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49dec540-e872-432f-bffe-1b0380ac0082","Type":"ContainerDied","Data":"256a7517664626ead142d4d5dec2607a661a8459a086b7a664b53dd69f9b3663"} Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.603198 4761 scope.go:117] "RemoveContainer" containerID="5888ac18ecadfb4983a3dc774d889f0a46c93806c8b965f02ef1b4898fdb22d2" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.603372 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.662076 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.680753 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.698494 4761 scope.go:117] "RemoveContainer" containerID="9ee7ce9221a6be795722d6e5f52ae5f0c03c8d8b610024b67cfd95e5744149c2" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.709933 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:19:20 crc kubenswrapper[4761]: E0307 08:19:20.710573 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.711704 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:19:20 crc kubenswrapper[4761]: E0307 08:19:20.712453 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49dec540-e872-432f-bffe-1b0380ac0082" containerName="setup-container" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.712478 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="49dec540-e872-432f-bffe-1b0380ac0082" containerName="setup-container" Mar 07 08:19:20 crc kubenswrapper[4761]: E0307 08:19:20.712504 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49dec540-e872-432f-bffe-1b0380ac0082" containerName="rabbitmq" Mar 07 08:19:20 crc 
kubenswrapper[4761]: I0307 08:19:20.712514 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="49dec540-e872-432f-bffe-1b0380ac0082" containerName="rabbitmq" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.712866 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="49dec540-e872-432f-bffe-1b0380ac0082" containerName="rabbitmq" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.714532 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.756521 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828552 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b857c4b2-5d07-434c-aeb0-7189b087b650-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828650 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828700 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828747 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b857c4b2-5d07-434c-aeb0-7189b087b650-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828791 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828846 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b857c4b2-5d07-434c-aeb0-7189b087b650-config-data\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828882 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b857c4b2-5d07-434c-aeb0-7189b087b650-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828901 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828933 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b857c4b2-5d07-434c-aeb0-7189b087b650-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828952 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828967 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66c2h\" (UniqueName: \"kubernetes.io/projected/b857c4b2-5d07-434c-aeb0-7189b087b650-kube-api-access-66c2h\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.930784 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.930837 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66c2h\" (UniqueName: \"kubernetes.io/projected/b857c4b2-5d07-434c-aeb0-7189b087b650-kube-api-access-66c2h\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.930976 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/b857c4b2-5d07-434c-aeb0-7189b087b650-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.931036 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.931087 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.931122 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b857c4b2-5d07-434c-aeb0-7189b087b650-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.931165 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.931249 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b857c4b2-5d07-434c-aeb0-7189b087b650-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.931291 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b857c4b2-5d07-434c-aeb0-7189b087b650-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.931319 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.931364 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.932065 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b857c4b2-5d07-434c-aeb0-7189b087b650-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.931368 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b857c4b2-5d07-434c-aeb0-7189b087b650-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.933127 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b857c4b2-5d07-434c-aeb0-7189b087b650-config-data\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.933436 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.934043 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b857c4b2-5d07-434c-aeb0-7189b087b650-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.936113 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b857c4b2-5d07-434c-aeb0-7189b087b650-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.939314 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b857c4b2-5d07-434c-aeb0-7189b087b650-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.944460 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.944471 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.945338 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.945390 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0e63d5dfd4825d4df4a1fd6592e0e906350781786a587f415bb4549b05f1b05e/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.954468 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66c2h\" (UniqueName: \"kubernetes.io/projected/b857c4b2-5d07-434c-aeb0-7189b087b650-kube-api-access-66c2h\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:21 crc kubenswrapper[4761]: I0307 08:19:21.047523 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 
08:19:21 crc kubenswrapper[4761]: I0307 08:19:21.055688 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 07 08:19:21 crc kubenswrapper[4761]: I0307 08:19:21.584208 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:19:21 crc kubenswrapper[4761]: I0307 08:19:21.618893 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b857c4b2-5d07-434c-aeb0-7189b087b650","Type":"ContainerStarted","Data":"88756d1cf7abf5fc300575e88ab8e8a1828685b0d175c0014cbf0a976a0d9acb"} Mar 07 08:19:21 crc kubenswrapper[4761]: I0307 08:19:21.724194 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49dec540-e872-432f-bffe-1b0380ac0082" path="/var/lib/kubelet/pods/49dec540-e872-432f-bffe-1b0380ac0082/volumes" Mar 07 08:19:24 crc kubenswrapper[4761]: I0307 08:19:24.680049 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b857c4b2-5d07-434c-aeb0-7189b087b650","Type":"ContainerStarted","Data":"8f6abe93141e09e9d56a1e9603a0c79d2426c54bc43b331c2149cc490e1112c0"} Mar 07 08:19:35 crc kubenswrapper[4761]: I0307 08:19:35.706517 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:19:35 crc kubenswrapper[4761]: E0307 08:19:35.707530 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:19:50 crc kubenswrapper[4761]: I0307 08:19:50.706165 4761 scope.go:117] "RemoveContainer" 
containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:19:50 crc kubenswrapper[4761]: E0307 08:19:50.707790 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:19:57 crc kubenswrapper[4761]: I0307 08:19:57.156351 4761 generic.go:334] "Generic (PLEG): container finished" podID="b857c4b2-5d07-434c-aeb0-7189b087b650" containerID="8f6abe93141e09e9d56a1e9603a0c79d2426c54bc43b331c2149cc490e1112c0" exitCode=0 Mar 07 08:19:57 crc kubenswrapper[4761]: I0307 08:19:57.156432 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b857c4b2-5d07-434c-aeb0-7189b087b650","Type":"ContainerDied","Data":"8f6abe93141e09e9d56a1e9603a0c79d2426c54bc43b331c2149cc490e1112c0"} Mar 07 08:19:58 crc kubenswrapper[4761]: I0307 08:19:58.175079 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b857c4b2-5d07-434c-aeb0-7189b087b650","Type":"ContainerStarted","Data":"207a780f8438d838a13fb7b06ec5d66050d7676bd97ce65a7f280d53df3529ab"} Mar 07 08:19:58 crc kubenswrapper[4761]: I0307 08:19:58.175890 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 07 08:19:58 crc kubenswrapper[4761]: I0307 08:19:58.207477 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.207453822 podStartE2EDuration="38.207453822s" podCreationTimestamp="2026-03-07 08:19:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-07 08:19:58.197338817 +0000 UTC m=+1855.106505302" watchObservedRunningTime="2026-03-07 08:19:58.207453822 +0000 UTC m=+1855.116620287" Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.165466 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547860-d6dm6"] Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.167776 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547860-d6dm6" Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.169934 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.170983 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.178373 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.187970 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547860-d6dm6"] Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.211925 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kctt\" (UniqueName: \"kubernetes.io/projected/ffa0ab32-8233-4b87-b335-eb94efbdfb06-kube-api-access-5kctt\") pod \"auto-csr-approver-29547860-d6dm6\" (UID: \"ffa0ab32-8233-4b87-b335-eb94efbdfb06\") " pod="openshift-infra/auto-csr-approver-29547860-d6dm6" Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.314944 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kctt\" (UniqueName: \"kubernetes.io/projected/ffa0ab32-8233-4b87-b335-eb94efbdfb06-kube-api-access-5kctt\") pod \"auto-csr-approver-29547860-d6dm6\" (UID: 
\"ffa0ab32-8233-4b87-b335-eb94efbdfb06\") " pod="openshift-infra/auto-csr-approver-29547860-d6dm6" Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.333093 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kctt\" (UniqueName: \"kubernetes.io/projected/ffa0ab32-8233-4b87-b335-eb94efbdfb06-kube-api-access-5kctt\") pod \"auto-csr-approver-29547860-d6dm6\" (UID: \"ffa0ab32-8233-4b87-b335-eb94efbdfb06\") " pod="openshift-infra/auto-csr-approver-29547860-d6dm6" Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.508254 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547860-d6dm6" Mar 07 08:20:01 crc kubenswrapper[4761]: I0307 08:20:01.012773 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547860-d6dm6"] Mar 07 08:20:01 crc kubenswrapper[4761]: W0307 08:20:01.013194 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffa0ab32_8233_4b87_b335_eb94efbdfb06.slice/crio-084bdba642be98987cb39cc71c68356e037e047b4d81b8dbf52c018fa7a9a572 WatchSource:0}: Error finding container 084bdba642be98987cb39cc71c68356e037e047b4d81b8dbf52c018fa7a9a572: Status 404 returned error can't find the container with id 084bdba642be98987cb39cc71c68356e037e047b4d81b8dbf52c018fa7a9a572 Mar 07 08:20:01 crc kubenswrapper[4761]: I0307 08:20:01.220767 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547860-d6dm6" event={"ID":"ffa0ab32-8233-4b87-b335-eb94efbdfb06","Type":"ContainerStarted","Data":"084bdba642be98987cb39cc71c68356e037e047b4d81b8dbf52c018fa7a9a572"} Mar 07 08:20:02 crc kubenswrapper[4761]: I0307 08:20:02.706061 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:20:02 crc kubenswrapper[4761]: E0307 08:20:02.715948 4761 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:20:03 crc kubenswrapper[4761]: I0307 08:20:03.244167 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547860-d6dm6" event={"ID":"ffa0ab32-8233-4b87-b335-eb94efbdfb06","Type":"ContainerStarted","Data":"10078fb6c1a8e617ee923e1cee93a96c671ff14c92c3f2b495d63428c1465950"} Mar 07 08:20:03 crc kubenswrapper[4761]: I0307 08:20:03.265834 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547860-d6dm6" podStartSLOduration=1.496173744 podStartE2EDuration="3.265816437s" podCreationTimestamp="2026-03-07 08:20:00 +0000 UTC" firstStartedPulling="2026-03-07 08:20:01.016661848 +0000 UTC m=+1857.925828323" lastFinishedPulling="2026-03-07 08:20:02.786304541 +0000 UTC m=+1859.695471016" observedRunningTime="2026-03-07 08:20:03.262130997 +0000 UTC m=+1860.171297472" watchObservedRunningTime="2026-03-07 08:20:03.265816437 +0000 UTC m=+1860.174982912" Mar 07 08:20:04 crc kubenswrapper[4761]: I0307 08:20:04.256780 4761 generic.go:334] "Generic (PLEG): container finished" podID="ffa0ab32-8233-4b87-b335-eb94efbdfb06" containerID="10078fb6c1a8e617ee923e1cee93a96c671ff14c92c3f2b495d63428c1465950" exitCode=0 Mar 07 08:20:04 crc kubenswrapper[4761]: I0307 08:20:04.257313 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547860-d6dm6" event={"ID":"ffa0ab32-8233-4b87-b335-eb94efbdfb06","Type":"ContainerDied","Data":"10078fb6c1a8e617ee923e1cee93a96c671ff14c92c3f2b495d63428c1465950"} Mar 07 08:20:05 crc 
kubenswrapper[4761]: I0307 08:20:05.835250 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547860-d6dm6" Mar 07 08:20:05 crc kubenswrapper[4761]: I0307 08:20:05.876781 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kctt\" (UniqueName: \"kubernetes.io/projected/ffa0ab32-8233-4b87-b335-eb94efbdfb06-kube-api-access-5kctt\") pod \"ffa0ab32-8233-4b87-b335-eb94efbdfb06\" (UID: \"ffa0ab32-8233-4b87-b335-eb94efbdfb06\") " Mar 07 08:20:05 crc kubenswrapper[4761]: I0307 08:20:05.883467 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffa0ab32-8233-4b87-b335-eb94efbdfb06-kube-api-access-5kctt" (OuterVolumeSpecName: "kube-api-access-5kctt") pod "ffa0ab32-8233-4b87-b335-eb94efbdfb06" (UID: "ffa0ab32-8233-4b87-b335-eb94efbdfb06"). InnerVolumeSpecName "kube-api-access-5kctt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:20:05 crc kubenswrapper[4761]: I0307 08:20:05.980534 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kctt\" (UniqueName: \"kubernetes.io/projected/ffa0ab32-8233-4b87-b335-eb94efbdfb06-kube-api-access-5kctt\") on node \"crc\" DevicePath \"\"" Mar 07 08:20:06 crc kubenswrapper[4761]: I0307 08:20:06.282315 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547860-d6dm6" event={"ID":"ffa0ab32-8233-4b87-b335-eb94efbdfb06","Type":"ContainerDied","Data":"084bdba642be98987cb39cc71c68356e037e047b4d81b8dbf52c018fa7a9a572"} Mar 07 08:20:06 crc kubenswrapper[4761]: I0307 08:20:06.282354 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="084bdba642be98987cb39cc71c68356e037e047b4d81b8dbf52c018fa7a9a572" Mar 07 08:20:06 crc kubenswrapper[4761]: I0307 08:20:06.282418 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547860-d6dm6" Mar 07 08:20:06 crc kubenswrapper[4761]: I0307 08:20:06.343517 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547854-b54w4"] Mar 07 08:20:06 crc kubenswrapper[4761]: I0307 08:20:06.359535 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547854-b54w4"] Mar 07 08:20:07 crc kubenswrapper[4761]: I0307 08:20:07.724782 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa559f07-f757-48aa-91d6-8408654be6fb" path="/var/lib/kubelet/pods/fa559f07-f757-48aa-91d6-8408654be6fb/volumes" Mar 07 08:20:11 crc kubenswrapper[4761]: I0307 08:20:11.059992 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 07 08:20:14 crc kubenswrapper[4761]: I0307 08:20:14.707400 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:20:14 crc kubenswrapper[4761]: E0307 08:20:14.708596 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:20:15 crc kubenswrapper[4761]: I0307 08:20:15.041749 4761 scope.go:117] "RemoveContainer" containerID="130936491ac0d66e8bc5863e526f0ce24165cc3492d527d7ec2236bfdce93f7a" Mar 07 08:20:15 crc kubenswrapper[4761]: I0307 08:20:15.091124 4761 scope.go:117] "RemoveContainer" containerID="eadae3021f65aa1e1112361e2bcf5fc4f2eda7c5b0d47eff67c1e186e5afd8b1" Mar 07 08:20:15 crc kubenswrapper[4761]: I0307 08:20:15.182340 4761 scope.go:117] "RemoveContainer" 
containerID="d4664c58f260536a81211c969a35f89ac9977c97d2b99db0a4bb205c039801d8" Mar 07 08:20:28 crc kubenswrapper[4761]: I0307 08:20:28.706181 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:20:28 crc kubenswrapper[4761]: E0307 08:20:28.707192 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.329855 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fmzl6"] Mar 07 08:20:37 crc kubenswrapper[4761]: E0307 08:20:37.331346 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa0ab32-8233-4b87-b335-eb94efbdfb06" containerName="oc" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.331371 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa0ab32-8233-4b87-b335-eb94efbdfb06" containerName="oc" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.331867 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffa0ab32-8233-4b87-b335-eb94efbdfb06" containerName="oc" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.335197 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.343052 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmzl6"] Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.473254 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-catalog-content\") pod \"redhat-marketplace-fmzl6\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.473340 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5dpd\" (UniqueName: \"kubernetes.io/projected/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-kube-api-access-v5dpd\") pod \"redhat-marketplace-fmzl6\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.473410 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-utilities\") pod \"redhat-marketplace-fmzl6\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.576055 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-catalog-content\") pod \"redhat-marketplace-fmzl6\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.576477 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-v5dpd\" (UniqueName: \"kubernetes.io/projected/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-kube-api-access-v5dpd\") pod \"redhat-marketplace-fmzl6\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.576576 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-catalog-content\") pod \"redhat-marketplace-fmzl6\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.576593 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-utilities\") pod \"redhat-marketplace-fmzl6\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.577101 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-utilities\") pod \"redhat-marketplace-fmzl6\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.597260 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5dpd\" (UniqueName: \"kubernetes.io/projected/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-kube-api-access-v5dpd\") pod \"redhat-marketplace-fmzl6\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.672406 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:38 crc kubenswrapper[4761]: I0307 08:20:38.187610 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmzl6"] Mar 07 08:20:38 crc kubenswrapper[4761]: I0307 08:20:38.698540 4761 generic.go:334] "Generic (PLEG): container finished" podID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerID="86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3" exitCode=0 Mar 07 08:20:38 crc kubenswrapper[4761]: I0307 08:20:38.698628 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmzl6" event={"ID":"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2","Type":"ContainerDied","Data":"86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3"} Mar 07 08:20:38 crc kubenswrapper[4761]: I0307 08:20:38.698875 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmzl6" event={"ID":"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2","Type":"ContainerStarted","Data":"3b44f33081a08db07de70fe8633b85b2358b39e49ca72c0790ae9670d16113c2"} Mar 07 08:20:39 crc kubenswrapper[4761]: I0307 08:20:39.726902 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmzl6" event={"ID":"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2","Type":"ContainerStarted","Data":"54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da"} Mar 07 08:20:40 crc kubenswrapper[4761]: I0307 08:20:40.732665 4761 generic.go:334] "Generic (PLEG): container finished" podID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerID="54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da" exitCode=0 Mar 07 08:20:40 crc kubenswrapper[4761]: I0307 08:20:40.732919 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmzl6" 
event={"ID":"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2","Type":"ContainerDied","Data":"54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da"}
Mar 07 08:20:41 crc kubenswrapper[4761]: I0307 08:20:41.761236 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmzl6" event={"ID":"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2","Type":"ContainerStarted","Data":"4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d"}
Mar 07 08:20:41 crc kubenswrapper[4761]: I0307 08:20:41.798234 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fmzl6" podStartSLOduration=2.3002590769999998 podStartE2EDuration="4.798210758s" podCreationTimestamp="2026-03-07 08:20:37 +0000 UTC" firstStartedPulling="2026-03-07 08:20:38.701185553 +0000 UTC m=+1895.610352028" lastFinishedPulling="2026-03-07 08:20:41.199137194 +0000 UTC m=+1898.108303709" observedRunningTime="2026-03-07 08:20:41.79005182 +0000 UTC m=+1898.699218325" watchObservedRunningTime="2026-03-07 08:20:41.798210758 +0000 UTC m=+1898.707377243"
Mar 07 08:20:42 crc kubenswrapper[4761]: I0307 08:20:42.706406 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806"
Mar 07 08:20:42 crc kubenswrapper[4761]: E0307 08:20:42.707457 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:20:43 crc kubenswrapper[4761]: I0307 08:20:43.166509 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5b43-account-create-update-jpq6b"]
Mar 07 08:20:43 crc kubenswrapper[4761]: I0307 08:20:43.181706 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5b43-account-create-update-jpq6b"]
Mar 07 08:20:43 crc kubenswrapper[4761]: I0307 08:20:43.725123 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab06ca00-a8f7-40a5-a332-b00fc1b4de8b" path="/var/lib/kubelet/pods/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b/volumes"
Mar 07 08:20:44 crc kubenswrapper[4761]: I0307 08:20:44.036382 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-458dc"]
Mar 07 08:20:44 crc kubenswrapper[4761]: I0307 08:20:44.052062 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-557c-account-create-update-jtvjg"]
Mar 07 08:20:44 crc kubenswrapper[4761]: I0307 08:20:44.065019 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-cv77d"]
Mar 07 08:20:44 crc kubenswrapper[4761]: I0307 08:20:44.076273 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-458dc"]
Mar 07 08:20:44 crc kubenswrapper[4761]: I0307 08:20:44.088089 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-557c-account-create-update-jtvjg"]
Mar 07 08:20:44 crc kubenswrapper[4761]: I0307 08:20:44.100537 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-cv77d"]
Mar 07 08:20:45 crc kubenswrapper[4761]: I0307 08:20:45.037120 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-ee06-account-create-update-s6d4f"]
Mar 07 08:20:45 crc kubenswrapper[4761]: I0307 08:20:45.084204 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-b9fmh"]
Mar 07 08:20:45 crc kubenswrapper[4761]: I0307 08:20:45.103781 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-ee06-account-create-update-s6d4f"]
Mar 07 08:20:45 crc kubenswrapper[4761]: I0307 08:20:45.117796 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-b9fmh"]
Mar 07 08:20:45 crc kubenswrapper[4761]: I0307 08:20:45.726566 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c13d8a-a25a-419e-9267-6894a86897cc" path="/var/lib/kubelet/pods/70c13d8a-a25a-419e-9267-6894a86897cc/volumes"
Mar 07 08:20:45 crc kubenswrapper[4761]: I0307 08:20:45.728324 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0" path="/var/lib/kubelet/pods/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0/volumes"
Mar 07 08:20:45 crc kubenswrapper[4761]: I0307 08:20:45.731765 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ecdc2ad-5812-4bb2-a6ea-8659b3993985" path="/var/lib/kubelet/pods/9ecdc2ad-5812-4bb2-a6ea-8659b3993985/volumes"
Mar 07 08:20:45 crc kubenswrapper[4761]: I0307 08:20:45.733400 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b12971f6-3d67-4225-beab-46d9d3505ae1" path="/var/lib/kubelet/pods/b12971f6-3d67-4225-beab-46d9d3505ae1/volumes"
Mar 07 08:20:45 crc kubenswrapper[4761]: I0307 08:20:45.735623 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc4048ba-7b5a-48ab-b609-21cc5598d56c" path="/var/lib/kubelet/pods/dc4048ba-7b5a-48ab-b609-21cc5598d56c/volumes"
Mar 07 08:20:47 crc kubenswrapper[4761]: I0307 08:20:47.673144 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fmzl6"
Mar 07 08:20:47 crc kubenswrapper[4761]: I0307 08:20:47.673482 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fmzl6"
Mar 07 08:20:47 crc kubenswrapper[4761]: I0307 08:20:47.730162 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fmzl6"
Mar 07 08:20:47 crc kubenswrapper[4761]: I0307 08:20:47.951210 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fmzl6"
Mar 07 08:20:48 crc kubenswrapper[4761]: I0307 08:20:48.011399 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmzl6"]
Mar 07 08:20:49 crc kubenswrapper[4761]: I0307 08:20:49.879332 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fmzl6" podUID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerName="registry-server" containerID="cri-o://4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d" gracePeriod=2
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.496032 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmzl6"
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.648647 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-utilities\") pod \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") "
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.648782 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5dpd\" (UniqueName: \"kubernetes.io/projected/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-kube-api-access-v5dpd\") pod \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") "
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.649004 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-catalog-content\") pod \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") "
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.649565 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-utilities" (OuterVolumeSpecName: "utilities") pod "ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" (UID: "ed80f26b-74e1-49e0-a02b-4d1c25e16ff2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.649697 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.668737 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-kube-api-access-v5dpd" (OuterVolumeSpecName: "kube-api-access-v5dpd") pod "ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" (UID: "ed80f26b-74e1-49e0-a02b-4d1c25e16ff2"). InnerVolumeSpecName "kube-api-access-v5dpd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.688517 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" (UID: "ed80f26b-74e1-49e0-a02b-4d1c25e16ff2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.752466 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5dpd\" (UniqueName: \"kubernetes.io/projected/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-kube-api-access-v5dpd\") on node \"crc\" DevicePath \"\""
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.752513 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.895276 4761 generic.go:334] "Generic (PLEG): container finished" podID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerID="4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d" exitCode=0
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.895321 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmzl6" event={"ID":"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2","Type":"ContainerDied","Data":"4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d"}
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.895355 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmzl6" event={"ID":"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2","Type":"ContainerDied","Data":"3b44f33081a08db07de70fe8633b85b2358b39e49ca72c0790ae9670d16113c2"}
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.895367 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmzl6"
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.895374 4761 scope.go:117] "RemoveContainer" containerID="4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d"
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.934053 4761 scope.go:117] "RemoveContainer" containerID="54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da"
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.939664 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmzl6"]
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.954664 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmzl6"]
Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.963944 4761 scope.go:117] "RemoveContainer" containerID="86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3"
Mar 07 08:20:51 crc kubenswrapper[4761]: I0307 08:20:51.042366 4761 scope.go:117] "RemoveContainer" containerID="4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d"
Mar 07 08:20:51 crc kubenswrapper[4761]: E0307 08:20:51.043544 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d\": container with ID starting with 4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d not found: ID does not exist" containerID="4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d"
Mar 07 08:20:51 crc kubenswrapper[4761]: I0307 08:20:51.043589 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d"} err="failed to get container status \"4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d\": rpc error: code = NotFound desc = could not find container \"4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d\": container with ID starting with 4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d not found: ID does not exist"
Mar 07 08:20:51 crc kubenswrapper[4761]: I0307 08:20:51.043616 4761 scope.go:117] "RemoveContainer" containerID="54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da"
Mar 07 08:20:51 crc kubenswrapper[4761]: E0307 08:20:51.044154 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da\": container with ID starting with 54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da not found: ID does not exist" containerID="54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da"
Mar 07 08:20:51 crc kubenswrapper[4761]: I0307 08:20:51.044224 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da"} err="failed to get container status \"54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da\": rpc error: code = NotFound desc = could not find container \"54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da\": container with ID starting with 54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da not found: ID does not exist"
Mar 07 08:20:51 crc kubenswrapper[4761]: I0307 08:20:51.044267 4761 scope.go:117] "RemoveContainer" containerID="86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3"
Mar 07 08:20:51 crc kubenswrapper[4761]: E0307 08:20:51.044560 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3\": container with ID starting with 86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3 not found: ID does not exist" containerID="86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3"
Mar 07 08:20:51 crc kubenswrapper[4761]: I0307 08:20:51.044582 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3"} err="failed to get container status \"86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3\": rpc error: code = NotFound desc = could not find container \"86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3\": container with ID starting with 86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3 not found: ID does not exist"
Mar 07 08:20:51 crc kubenswrapper[4761]: I0307 08:20:51.728576 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" path="/var/lib/kubelet/pods/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2/volumes"
Mar 07 08:20:53 crc kubenswrapper[4761]: I0307 08:20:53.031040 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-tz9rv"]
Mar 07 08:20:53 crc kubenswrapper[4761]: I0307 08:20:53.071343 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-tz9rv"]
Mar 07 08:20:53 crc kubenswrapper[4761]: I0307 08:20:53.726109 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c" path="/var/lib/kubelet/pods/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c/volumes"
Mar 07 08:20:54 crc kubenswrapper[4761]: I0307 08:20:54.057907 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a970-account-create-update-pkxzm"]
Mar 07 08:20:54 crc kubenswrapper[4761]: I0307 08:20:54.074334 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z"]
Mar 07 08:20:54 crc kubenswrapper[4761]: I0307 08:20:54.092401 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z"]
Mar 07 08:20:54 crc kubenswrapper[4761]: I0307 08:20:54.104187 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a970-account-create-update-pkxzm"]
Mar 07 08:20:55 crc kubenswrapper[4761]: I0307 08:20:55.036366 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-49ec-account-create-update-257w6"]
Mar 07 08:20:55 crc kubenswrapper[4761]: I0307 08:20:55.047554 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-49ec-account-create-update-257w6"]
Mar 07 08:20:55 crc kubenswrapper[4761]: I0307 08:20:55.725001 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="042bb2b8-9493-439c-85e3-bb2766db2135" path="/var/lib/kubelet/pods/042bb2b8-9493-439c-85e3-bb2766db2135/volumes"
Mar 07 08:20:55 crc kubenswrapper[4761]: I0307 08:20:55.727377 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eaf98b6-b097-4cbe-9815-835cd72b2616" path="/var/lib/kubelet/pods/9eaf98b6-b097-4cbe-9815-835cd72b2616/volumes"
Mar 07 08:20:55 crc kubenswrapper[4761]: I0307 08:20:55.728826 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1946466-f406-4073-96f8-cc6e66148293" path="/var/lib/kubelet/pods/c1946466-f406-4073-96f8-cc6e66148293/volumes"
Mar 07 08:20:56 crc kubenswrapper[4761]: I0307 08:20:56.705875 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806"
Mar 07 08:20:56 crc kubenswrapper[4761]: E0307 08:20:56.706239 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:21:10 crc kubenswrapper[4761]: I0307 08:21:10.706315 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806"
Mar 07 08:21:10 crc kubenswrapper[4761]: E0307 08:21:10.707485 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.325560 4761 scope.go:117] "RemoveContainer" containerID="7e5ba0bde8469cf1aa8078524709ff2366dd55b3bda6ff4b838052755b33fd24"
Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.371054 4761 scope.go:117] "RemoveContainer" containerID="b97d97e3a4c4f2472de35f79e1c8d14798b00f3965cb5ecc889970e8b120eb9c"
Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.438315 4761 scope.go:117] "RemoveContainer" containerID="ee98daeed4689551f2d9b8f315dc5f2150a8e0d8bb1624db07ae27201527b436"
Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.506904 4761 scope.go:117] "RemoveContainer" containerID="4e7d19ecfc8d3734356a0832721b8ff789bad3f1d623fbc7262aab81f59906fc"
Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.549028 4761 scope.go:117] "RemoveContainer" containerID="62efc0d0d775ac67bd7a9ef68d8de2bc66a98a78db91f3f6ce87d93e2f2a1663"
Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.586750 4761 scope.go:117] "RemoveContainer" containerID="d38fa90028b86d72ed68d38df6e216cf503a5d579c9cddea71be1aba3c5e65a2"
Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.614638 4761 scope.go:117] "RemoveContainer" containerID="c15a8f34b90748d4123aa8305d977da985ad4ee833bd6258f7893d25a0f01981"
Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.682808 4761 scope.go:117] "RemoveContainer" containerID="b269857bff81c96eb8751012ebe23820ca1ecd1ca87d5b49700f96f3184ec666"
Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.708164 4761 scope.go:117] "RemoveContainer" containerID="178b4b5a5b5a97c98c5a01eeef66b7b962e2bb8a1f3fd5c70b486b42f553a81f"
Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.777894 4761 scope.go:117] "RemoveContainer" containerID="9fbf4f9d40a0ec24b8dea09bb5d46ee8c49f0582f2fa196ad53b3fa0be0e0a4f"
Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.816385 4761 scope.go:117] "RemoveContainer" containerID="69b4bf84dd39df6b7d9b398c110264cd806fbdc4859293bf644ef1767167f6e9"
Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.849077 4761 scope.go:117] "RemoveContainer" containerID="92d60cbd1931c0910d9a77a2b32fd62f51cb82efcb041fb0c916607c3418054a"
Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.887573 4761 scope.go:117] "RemoveContainer" containerID="10f5faab65c65733fae3ac0c1b8b365a3b145620a08013168d0ccf82c6a4bb89"
Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.915744 4761 scope.go:117] "RemoveContainer" containerID="604a9a3091641041b296f96b4f1d808f47de7c313c82fc9218ada4352b3da08b"
Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.944012 4761 scope.go:117] "RemoveContainer" containerID="24ce24b7ae154c50bbadc9b227eb94ce0080c6f5a420a8791dcddd59fe83f5fc"
Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.988829 4761 scope.go:117] "RemoveContainer" containerID="d0bdd15b2bc88eba2d15ee11a9ddb62245bc8b40ec0c2d87cdc1ae8f6b410cf0"
Mar 07 08:21:16 crc kubenswrapper[4761]: I0307 08:21:16.011829 4761 scope.go:117] "RemoveContainer" containerID="9521ac8897cc5031589e3a97f98d6810344dcf9dfb4311f397e7415d00277014"
Mar 07 08:21:16 crc kubenswrapper[4761]: I0307 08:21:16.051895 4761 scope.go:117] "RemoveContainer" containerID="73b7f5c1ad87980cb468fd2ea0a74afe21853e4cb686a20061f8344d21dbba9b"
Mar 07 08:21:16 crc kubenswrapper[4761]: I0307 08:21:16.097205 4761 scope.go:117] "RemoveContainer" containerID="07338e0375850617de7a90d252dd69e08c516f72a7329bdb33d6d7250f0f8095"
Mar 07 08:21:22 crc kubenswrapper[4761]: I0307 08:21:22.054043 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-pf6dj"]
Mar 07 08:21:22 crc kubenswrapper[4761]: I0307 08:21:22.070560 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-pf6dj"]
Mar 07 08:21:23 crc kubenswrapper[4761]: I0307 08:21:23.036883 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-wnw7q"]
Mar 07 08:21:23 crc kubenswrapper[4761]: I0307 08:21:23.047751 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-hbnpl"]
Mar 07 08:21:23 crc kubenswrapper[4761]: I0307 08:21:23.069588 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-wnw7q"]
Mar 07 08:21:23 crc kubenswrapper[4761]: I0307 08:21:23.080583 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-hbnpl"]
Mar 07 08:21:23 crc kubenswrapper[4761]: I0307 08:21:23.731909 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52ac8e30-44e2-48ba-8272-112bb012a7e2" path="/var/lib/kubelet/pods/52ac8e30-44e2-48ba-8272-112bb012a7e2/volumes"
Mar 07 08:21:23 crc kubenswrapper[4761]: I0307 08:21:23.735973 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b" path="/var/lib/kubelet/pods/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b/volumes"
Mar 07 08:21:23 crc kubenswrapper[4761]: I0307 08:21:23.740608 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92bbc752-8315-47e4-993a-db9de1da8c87" path="/var/lib/kubelet/pods/92bbc752-8315-47e4-993a-db9de1da8c87/volumes"
Mar 07 08:21:25 crc kubenswrapper[4761]: I0307 08:21:25.706834 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806"
Mar 07 08:21:25 crc kubenswrapper[4761]: E0307 08:21:25.707744 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:21:29 crc kubenswrapper[4761]: I0307 08:21:29.047479 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-mdw2w"]
Mar 07 08:21:29 crc kubenswrapper[4761]: I0307 08:21:29.070114 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-17dd-account-create-update-fwfjn"]
Mar 07 08:21:29 crc kubenswrapper[4761]: I0307 08:21:29.084167 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-17dd-account-create-update-fwfjn"]
Mar 07 08:21:29 crc kubenswrapper[4761]: I0307 08:21:29.097339 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-mdw2w"]
Mar 07 08:21:29 crc kubenswrapper[4761]: I0307 08:21:29.728602 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b359be0-899b-479e-ac6c-1ed4422b7da8" path="/var/lib/kubelet/pods/6b359be0-899b-479e-ac6c-1ed4422b7da8/volumes"
Mar 07 08:21:29 crc kubenswrapper[4761]: I0307 08:21:29.732251 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2f2f7f1-78f2-41ef-80a6-efa709f0c281" path="/var/lib/kubelet/pods/c2f2f7f1-78f2-41ef-80a6-efa709f0c281/volumes"
Mar 07 08:21:30 crc kubenswrapper[4761]: I0307 08:21:30.037487 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-3014-account-create-update-gtc26"]
Mar 07 08:21:30 crc kubenswrapper[4761]: I0307 08:21:30.051883 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-3014-account-create-update-gtc26"]
Mar 07 08:21:31 crc kubenswrapper[4761]: I0307 08:21:31.459767 4761 generic.go:334] "Generic (PLEG): container finished" podID="27f66d5b-c359-480d-9bb8-02447507d3ca" containerID="409bd61756789942b4acbc43b888dac4fc2e317ec8827dd92dcf87d2d790a713" exitCode=0
Mar 07 08:21:31 crc kubenswrapper[4761]: I0307 08:21:31.459865 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" event={"ID":"27f66d5b-c359-480d-9bb8-02447507d3ca","Type":"ContainerDied","Data":"409bd61756789942b4acbc43b888dac4fc2e317ec8827dd92dcf87d2d790a713"}
Mar 07 08:21:31 crc kubenswrapper[4761]: I0307 08:21:31.723898 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9894a0c-ae83-4f9b-96c5-4bac5772ad56" path="/var/lib/kubelet/pods/c9894a0c-ae83-4f9b-96c5-4bac5772ad56/volumes"
Mar 07 08:21:32 crc kubenswrapper[4761]: I0307 08:21:32.046395 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xhpdg"]
Mar 07 08:21:32 crc kubenswrapper[4761]: I0307 08:21:32.086485 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-eedb-account-create-update-wc6wq"]
Mar 07 08:21:32 crc kubenswrapper[4761]: I0307 08:21:32.099579 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-736f-account-create-update-jjxjx"]
Mar 07 08:21:32 crc kubenswrapper[4761]: I0307 08:21:32.111339 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-eedb-account-create-update-wc6wq"]
Mar 07 08:21:32 crc kubenswrapper[4761]: I0307 08:21:32.120296 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-736f-account-create-update-jjxjx"]
Mar 07 08:21:32 crc kubenswrapper[4761]: I0307 08:21:32.129273 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xhpdg"]
Mar 07 08:21:32 crc kubenswrapper[4761]: I0307 08:21:32.981042 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.092454 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-inventory\") pod \"27f66d5b-c359-480d-9bb8-02447507d3ca\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") "
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.092609 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-bootstrap-combined-ca-bundle\") pod \"27f66d5b-c359-480d-9bb8-02447507d3ca\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") "
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.092714 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-ssh-key-openstack-edpm-ipam\") pod \"27f66d5b-c359-480d-9bb8-02447507d3ca\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") "
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.092832 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kzxl\" (UniqueName: \"kubernetes.io/projected/27f66d5b-c359-480d-9bb8-02447507d3ca-kube-api-access-7kzxl\") pod \"27f66d5b-c359-480d-9bb8-02447507d3ca\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") "
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.098164 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27f66d5b-c359-480d-9bb8-02447507d3ca-kube-api-access-7kzxl" (OuterVolumeSpecName: "kube-api-access-7kzxl") pod "27f66d5b-c359-480d-9bb8-02447507d3ca" (UID: "27f66d5b-c359-480d-9bb8-02447507d3ca"). InnerVolumeSpecName "kube-api-access-7kzxl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.098706 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "27f66d5b-c359-480d-9bb8-02447507d3ca" (UID: "27f66d5b-c359-480d-9bb8-02447507d3ca"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.145481 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-inventory" (OuterVolumeSpecName: "inventory") pod "27f66d5b-c359-480d-9bb8-02447507d3ca" (UID: "27f66d5b-c359-480d-9bb8-02447507d3ca"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.146876 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "27f66d5b-c359-480d-9bb8-02447507d3ca" (UID: "27f66d5b-c359-480d-9bb8-02447507d3ca"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.196376 4761 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.196411 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.196423 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kzxl\" (UniqueName: \"kubernetes.io/projected/27f66d5b-c359-480d-9bb8-02447507d3ca-kube-api-access-7kzxl\") on node \"crc\" DevicePath \"\""
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.196433 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-inventory\") on node \"crc\" DevicePath \"\""
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.483007 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" event={"ID":"27f66d5b-c359-480d-9bb8-02447507d3ca","Type":"ContainerDied","Data":"4d1f47a95b5bd7743d5c43cddb0d2812b995552636246a3bcf4c7186386ea5fe"}
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.483041 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d1f47a95b5bd7743d5c43cddb0d2812b995552636246a3bcf4c7186386ea5fe"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.483105 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.585320 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5"]
Mar 07 08:21:33 crc kubenswrapper[4761]: E0307 08:21:33.585869 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerName="registry-server"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.585888 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerName="registry-server"
Mar 07 08:21:33 crc kubenswrapper[4761]: E0307 08:21:33.585909 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f66d5b-c359-480d-9bb8-02447507d3ca" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.585917 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f66d5b-c359-480d-9bb8-02447507d3ca" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Mar 07 08:21:33 crc kubenswrapper[4761]: E0307 08:21:33.585936 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerName="extract-content"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.585943 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerName="extract-content"
Mar 07 08:21:33 crc kubenswrapper[4761]: E0307 08:21:33.585961 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerName="extract-utilities"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.585967 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerName="extract-utilities"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.586183 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerName="registry-server"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.586216 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="27f66d5b-c359-480d-9bb8-02447507d3ca" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.587024 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.588732 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.589017 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.589228 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.591662 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.605734 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5"]
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.709565 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj47v\" (UniqueName: \"kubernetes.io/projected/1ee12ec5-76cf-4824-9882-d55c16a3c08e-kube-api-access-sj47v\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.709979 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.710209 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.721764 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47e8c767-31e1-4609-8c1f-b62577164637" path="/var/lib/kubelet/pods/47e8c767-31e1-4609-8c1f-b62577164637/volumes"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.724117 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="573aa590-eee5-4f25-80ba-8bcf0a712d6f" path="/var/lib/kubelet/pods/573aa590-eee5-4f25-80ba-8bcf0a712d6f/volumes"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.725692 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4d5d960-90ad-4ca1-a874-6903a4d93d90" path="/var/lib/kubelet/pods/b4d5d960-90ad-4ca1-a874-6903a4d93d90/volumes"
Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.811871 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5\" (UID: 
\"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.812059 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj47v\" (UniqueName: \"kubernetes.io/projected/1ee12ec5-76cf-4824-9882-d55c16a3c08e-kube-api-access-sj47v\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.812313 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.817452 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.824601 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.829118 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj47v\" (UniqueName: \"kubernetes.io/projected/1ee12ec5-76cf-4824-9882-d55c16a3c08e-kube-api-access-sj47v\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.911584 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:21:34 crc kubenswrapper[4761]: I0307 08:21:34.471043 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5"] Mar 07 08:21:34 crc kubenswrapper[4761]: I0307 08:21:34.502066 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" event={"ID":"1ee12ec5-76cf-4824-9882-d55c16a3c08e","Type":"ContainerStarted","Data":"1b358518e6c476a57a727cf767061051d35b33836924847906a2d4472d3f3da9"} Mar 07 08:21:35 crc kubenswrapper[4761]: I0307 08:21:35.079016 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-tctqn"] Mar 07 08:21:35 crc kubenswrapper[4761]: I0307 08:21:35.092096 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-tctqn"] Mar 07 08:21:35 crc kubenswrapper[4761]: I0307 08:21:35.514351 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" event={"ID":"1ee12ec5-76cf-4824-9882-d55c16a3c08e","Type":"ContainerStarted","Data":"69b53d2d904bce2a677b915a6b2d44900d2544b881b943ea96f71ced29b52b11"} Mar 07 08:21:35 crc kubenswrapper[4761]: I0307 08:21:35.719143 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e98bf9-0ded-4a61-b436-1f652f69e599" 
path="/var/lib/kubelet/pods/15e98bf9-0ded-4a61-b436-1f652f69e599/volumes" Mar 07 08:21:36 crc kubenswrapper[4761]: I0307 08:21:36.707454 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:21:36 crc kubenswrapper[4761]: E0307 08:21:36.708008 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:21:38 crc kubenswrapper[4761]: I0307 08:21:38.062072 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" podStartSLOduration=4.560726088 podStartE2EDuration="5.062048302s" podCreationTimestamp="2026-03-07 08:21:33 +0000 UTC" firstStartedPulling="2026-03-07 08:21:34.487602923 +0000 UTC m=+1951.396769398" lastFinishedPulling="2026-03-07 08:21:34.988925127 +0000 UTC m=+1951.898091612" observedRunningTime="2026-03-07 08:21:35.539805872 +0000 UTC m=+1952.448972347" watchObservedRunningTime="2026-03-07 08:21:38.062048302 +0000 UTC m=+1954.971214787" Mar 07 08:21:38 crc kubenswrapper[4761]: I0307 08:21:38.069986 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-g9w2m"] Mar 07 08:21:38 crc kubenswrapper[4761]: I0307 08:21:38.084320 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-g9w2m"] Mar 07 08:21:39 crc kubenswrapper[4761]: I0307 08:21:39.728684 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a990e713-634f-47c4-acbe-980ed66d30fe" path="/var/lib/kubelet/pods/a990e713-634f-47c4-acbe-980ed66d30fe/volumes" Mar 07 08:21:50 crc 
kubenswrapper[4761]: I0307 08:21:50.706347 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:21:50 crc kubenswrapper[4761]: E0307 08:21:50.707708 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.145695 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547862-td4lg"] Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.148343 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547862-td4lg" Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.150978 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.150994 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.151196 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.189330 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547862-td4lg"] Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.222000 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pnxh\" (UniqueName: 
\"kubernetes.io/projected/256bcb0e-2dae-4547-a0d9-5f9545732bc7-kube-api-access-4pnxh\") pod \"auto-csr-approver-29547862-td4lg\" (UID: \"256bcb0e-2dae-4547-a0d9-5f9545732bc7\") " pod="openshift-infra/auto-csr-approver-29547862-td4lg" Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.324412 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pnxh\" (UniqueName: \"kubernetes.io/projected/256bcb0e-2dae-4547-a0d9-5f9545732bc7-kube-api-access-4pnxh\") pod \"auto-csr-approver-29547862-td4lg\" (UID: \"256bcb0e-2dae-4547-a0d9-5f9545732bc7\") " pod="openshift-infra/auto-csr-approver-29547862-td4lg" Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.342201 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pnxh\" (UniqueName: \"kubernetes.io/projected/256bcb0e-2dae-4547-a0d9-5f9545732bc7-kube-api-access-4pnxh\") pod \"auto-csr-approver-29547862-td4lg\" (UID: \"256bcb0e-2dae-4547-a0d9-5f9545732bc7\") " pod="openshift-infra/auto-csr-approver-29547862-td4lg" Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.488930 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547862-td4lg" Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.986127 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547862-td4lg"] Mar 07 08:22:01 crc kubenswrapper[4761]: I0307 08:22:01.851186 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547862-td4lg" event={"ID":"256bcb0e-2dae-4547-a0d9-5f9545732bc7","Type":"ContainerStarted","Data":"ef302ea03bac25948d566baefdc632eaa310ed8a38f9be506d3dd65d3f40d9cc"} Mar 07 08:22:02 crc kubenswrapper[4761]: I0307 08:22:02.862426 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547862-td4lg" event={"ID":"256bcb0e-2dae-4547-a0d9-5f9545732bc7","Type":"ContainerStarted","Data":"79627cce3b9c042cfb01ec6002981e9c1693a4df041409e9c65c779592c48701"} Mar 07 08:22:02 crc kubenswrapper[4761]: I0307 08:22:02.878084 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547862-td4lg" podStartSLOduration=1.665771471 podStartE2EDuration="2.878068353s" podCreationTimestamp="2026-03-07 08:22:00 +0000 UTC" firstStartedPulling="2026-03-07 08:22:00.996172087 +0000 UTC m=+1977.905338592" lastFinishedPulling="2026-03-07 08:22:02.208468989 +0000 UTC m=+1979.117635474" observedRunningTime="2026-03-07 08:22:02.874943627 +0000 UTC m=+1979.784110122" watchObservedRunningTime="2026-03-07 08:22:02.878068353 +0000 UTC m=+1979.787234828" Mar 07 08:22:03 crc kubenswrapper[4761]: I0307 08:22:03.872973 4761 generic.go:334] "Generic (PLEG): container finished" podID="256bcb0e-2dae-4547-a0d9-5f9545732bc7" containerID="79627cce3b9c042cfb01ec6002981e9c1693a4df041409e9c65c779592c48701" exitCode=0 Mar 07 08:22:03 crc kubenswrapper[4761]: I0307 08:22:03.873057 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547862-td4lg" 
event={"ID":"256bcb0e-2dae-4547-a0d9-5f9545732bc7","Type":"ContainerDied","Data":"79627cce3b9c042cfb01ec6002981e9c1693a4df041409e9c65c779592c48701"} Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.343174 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547862-td4lg" Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.457423 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pnxh\" (UniqueName: \"kubernetes.io/projected/256bcb0e-2dae-4547-a0d9-5f9545732bc7-kube-api-access-4pnxh\") pod \"256bcb0e-2dae-4547-a0d9-5f9545732bc7\" (UID: \"256bcb0e-2dae-4547-a0d9-5f9545732bc7\") " Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.464572 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/256bcb0e-2dae-4547-a0d9-5f9545732bc7-kube-api-access-4pnxh" (OuterVolumeSpecName: "kube-api-access-4pnxh") pod "256bcb0e-2dae-4547-a0d9-5f9545732bc7" (UID: "256bcb0e-2dae-4547-a0d9-5f9545732bc7"). InnerVolumeSpecName "kube-api-access-4pnxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.562816 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pnxh\" (UniqueName: \"kubernetes.io/projected/256bcb0e-2dae-4547-a0d9-5f9545732bc7-kube-api-access-4pnxh\") on node \"crc\" DevicePath \"\"" Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.705505 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:22:05 crc kubenswrapper[4761]: E0307 08:22:05.706166 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.906944 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547862-td4lg" event={"ID":"256bcb0e-2dae-4547-a0d9-5f9545732bc7","Type":"ContainerDied","Data":"ef302ea03bac25948d566baefdc632eaa310ed8a38f9be506d3dd65d3f40d9cc"} Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.906994 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef302ea03bac25948d566baefdc632eaa310ed8a38f9be506d3dd65d3f40d9cc" Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.907000 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547862-td4lg" Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.963282 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547856-zvszx"] Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.975300 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547856-zvszx"] Mar 07 08:22:07 crc kubenswrapper[4761]: I0307 08:22:07.718059 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9136161-bf41-4d51-8873-1862fc46f1ea" path="/var/lib/kubelet/pods/d9136161-bf41-4d51-8873-1862fc46f1ea/volumes" Mar 07 08:22:13 crc kubenswrapper[4761]: I0307 08:22:13.097525 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-vthx6"] Mar 07 08:22:13 crc kubenswrapper[4761]: I0307 08:22:13.109926 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-vthx6"] Mar 07 08:22:13 crc kubenswrapper[4761]: I0307 08:22:13.726741 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aa749a9-f668-4927-8a9a-28df83640ac4" path="/var/lib/kubelet/pods/0aa749a9-f668-4927-8a9a-28df83640ac4/volumes" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.444036 4761 scope.go:117] "RemoveContainer" containerID="f9efffad10394a551925d203976714f1d199a96ec9a9d78778c7d97eb32fec2c" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.480863 4761 scope.go:117] "RemoveContainer" containerID="f625278ad061e03435fe6dd38c6b918071ccbe277752ebf56038dc3f252be709" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.562133 4761 scope.go:117] "RemoveContainer" containerID="9388e27b172f2bb94960bcb3ae75f0505a3ee7ade70af79044c0ce8363c56503" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.636920 4761 scope.go:117] "RemoveContainer" containerID="542f79b9da20217da4609522244e7105c548cdfef4734a40d1dafb1bb2fb8f49" Mar 07 08:22:16 crc kubenswrapper[4761]: 
I0307 08:22:16.674883 4761 scope.go:117] "RemoveContainer" containerID="2dd284c471d3dab40868b7f4a2f639ee7f217f8244cd3b21fbf7065bef24cb93" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.758335 4761 scope.go:117] "RemoveContainer" containerID="894118d7d8b95a32c8f3ddf3e2f498ea4edd0ef3d4c6251c424e04fb6574d11a" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.787137 4761 scope.go:117] "RemoveContainer" containerID="c03ac32aaa97dba1c311494ead8833dd468ddd521d71d0daa9a777f906ff04e3" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.826387 4761 scope.go:117] "RemoveContainer" containerID="b305d8cec5e50079f6c2ae9f3ecf5ce4a21203d5c8c4e48dd5c5f168bcb4870f" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.863050 4761 scope.go:117] "RemoveContainer" containerID="552ada8980f0b2062dc812b73b1d81fa326f40eda6c62f34bd26a1ce3804cc8d" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.884922 4761 scope.go:117] "RemoveContainer" containerID="7dc0901d8bff55c1c74207d6bd5522c5c55621687f407e19c00a0a08ad96732d" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.915030 4761 scope.go:117] "RemoveContainer" containerID="0f58c4fafff0cb8ab97e33e3ab3d9ce6836a3e4ac1439c19a573c21811185fee" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.939458 4761 scope.go:117] "RemoveContainer" containerID="fe7c46f93fcb404a48fdfddcf53140cbe34999481e23b77955840ad956bcf535" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.964534 4761 scope.go:117] "RemoveContainer" containerID="213af97bbe0e3ae38c1d1515fc22f6b13311545e5a40f677bbee0870e83ed3ae" Mar 07 08:22:17 crc kubenswrapper[4761]: I0307 08:22:17.706620 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:22:17 crc kubenswrapper[4761]: E0307 08:22:17.707007 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:22:27 crc kubenswrapper[4761]: I0307 08:22:27.035259 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-kwf9k"] Mar 07 08:22:27 crc kubenswrapper[4761]: I0307 08:22:27.047439 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-kwf9k"] Mar 07 08:22:27 crc kubenswrapper[4761]: I0307 08:22:27.729886 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1302a491-8b5e-4d96-a192-ae81c6396870" path="/var/lib/kubelet/pods/1302a491-8b5e-4d96-a192-ae81c6396870/volumes" Mar 07 08:22:28 crc kubenswrapper[4761]: I0307 08:22:28.706508 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:22:28 crc kubenswrapper[4761]: E0307 08:22:28.707230 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:22:31 crc kubenswrapper[4761]: I0307 08:22:31.051084 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mb4ct"] Mar 07 08:22:31 crc kubenswrapper[4761]: I0307 08:22:31.064640 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mb4ct"] Mar 07 08:22:31 crc kubenswrapper[4761]: I0307 08:22:31.733881 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30f40316-2c99-4892-b3c5-9e3e61f05212" 
path="/var/lib/kubelet/pods/30f40316-2c99-4892-b3c5-9e3e61f05212/volumes" Mar 07 08:22:34 crc kubenswrapper[4761]: I0307 08:22:34.056973 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-wnsq8"] Mar 07 08:22:34 crc kubenswrapper[4761]: I0307 08:22:34.071129 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-wnsq8"] Mar 07 08:22:35 crc kubenswrapper[4761]: I0307 08:22:35.720023 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b3dba79-45f7-4154-9691-fa333ba6ad0d" path="/var/lib/kubelet/pods/9b3dba79-45f7-4154-9691-fa333ba6ad0d/volumes" Mar 07 08:22:39 crc kubenswrapper[4761]: I0307 08:22:39.706748 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:22:39 crc kubenswrapper[4761]: E0307 08:22:39.708170 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:22:49 crc kubenswrapper[4761]: I0307 08:22:49.050887 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-d9psc"] Mar 07 08:22:49 crc kubenswrapper[4761]: I0307 08:22:49.066175 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-d9psc"] Mar 07 08:22:49 crc kubenswrapper[4761]: I0307 08:22:49.716737 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="782631b9-e01d-424c-af31-3471bfdf1587" path="/var/lib/kubelet/pods/782631b9-e01d-424c-af31-3471bfdf1587/volumes" Mar 07 08:22:52 crc kubenswrapper[4761]: I0307 08:22:52.706487 4761 scope.go:117] "RemoveContainer" 
containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:22:52 crc kubenswrapper[4761]: E0307 08:22:52.707594 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:23:07 crc kubenswrapper[4761]: I0307 08:23:07.707424 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:23:07 crc kubenswrapper[4761]: E0307 08:23:07.709097 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:23:17 crc kubenswrapper[4761]: I0307 08:23:17.261645 4761 scope.go:117] "RemoveContainer" containerID="f9d5ffeebc50db6db5ddcbc389945c33747c9e0d2dcc1353c4f6cd5238374d8b" Mar 07 08:23:17 crc kubenswrapper[4761]: I0307 08:23:17.293639 4761 scope.go:117] "RemoveContainer" containerID="72c5aef6ae252c2f4b34e163aee65c7757addb3a89f37b5d72863ebaa2775b47" Mar 07 08:23:17 crc kubenswrapper[4761]: I0307 08:23:17.350345 4761 scope.go:117] "RemoveContainer" containerID="a4cceda235cdb340157db8083fb5a763bc0408a1d5edeb08189f027c6a110169" Mar 07 08:23:17 crc kubenswrapper[4761]: I0307 08:23:17.410888 4761 scope.go:117] "RemoveContainer" containerID="c28cc09420ea2ac493abf8f06587bcec5b390f6464161eeca9b61f712c64b3e1" Mar 07 08:23:18 crc 
kubenswrapper[4761]: I0307 08:23:18.575867 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mkbnc"] Mar 07 08:23:18 crc kubenswrapper[4761]: E0307 08:23:18.576577 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="256bcb0e-2dae-4547-a0d9-5f9545732bc7" containerName="oc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.576588 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="256bcb0e-2dae-4547-a0d9-5f9545732bc7" containerName="oc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.576850 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="256bcb0e-2dae-4547-a0d9-5f9545732bc7" containerName="oc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.578879 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.592272 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mkbnc"] Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.733742 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9b7h\" (UniqueName: \"kubernetes.io/projected/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-kube-api-access-c9b7h\") pod \"certified-operators-mkbnc\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.734241 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-utilities\") pod \"certified-operators-mkbnc\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.735089 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-catalog-content\") pod \"certified-operators-mkbnc\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.837622 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9b7h\" (UniqueName: \"kubernetes.io/projected/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-kube-api-access-c9b7h\") pod \"certified-operators-mkbnc\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.837778 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-utilities\") pod \"certified-operators-mkbnc\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.837866 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-catalog-content\") pod \"certified-operators-mkbnc\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.838431 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-catalog-content\") pod \"certified-operators-mkbnc\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.838456 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-utilities\") pod \"certified-operators-mkbnc\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.861929 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9b7h\" (UniqueName: \"kubernetes.io/projected/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-kube-api-access-c9b7h\") pod \"certified-operators-mkbnc\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.956315 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:19 crc kubenswrapper[4761]: I0307 08:23:19.493707 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mkbnc"] Mar 07 08:23:19 crc kubenswrapper[4761]: I0307 08:23:19.949897 4761 generic.go:334] "Generic (PLEG): container finished" podID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerID="a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a" exitCode=0 Mar 07 08:23:19 crc kubenswrapper[4761]: I0307 08:23:19.949964 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkbnc" event={"ID":"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf","Type":"ContainerDied","Data":"a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a"} Mar 07 08:23:19 crc kubenswrapper[4761]: I0307 08:23:19.951215 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkbnc" event={"ID":"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf","Type":"ContainerStarted","Data":"cbd4b766778c1b6e924d54a6d3bdf3f61405e024f607cc35923c75f35e9024cb"} Mar 07 08:23:19 crc 
kubenswrapper[4761]: I0307 08:23:19.952350 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:23:21 crc kubenswrapper[4761]: I0307 08:23:21.705680 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:23:21 crc kubenswrapper[4761]: E0307 08:23:21.706601 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:23:21 crc kubenswrapper[4761]: I0307 08:23:21.986961 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkbnc" event={"ID":"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf","Type":"ContainerStarted","Data":"b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65"} Mar 07 08:23:24 crc kubenswrapper[4761]: I0307 08:23:24.011774 4761 generic.go:334] "Generic (PLEG): container finished" podID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerID="b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65" exitCode=0 Mar 07 08:23:24 crc kubenswrapper[4761]: I0307 08:23:24.011874 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkbnc" event={"ID":"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf","Type":"ContainerDied","Data":"b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65"} Mar 07 08:23:25 crc kubenswrapper[4761]: I0307 08:23:25.028013 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkbnc" 
event={"ID":"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf","Type":"ContainerStarted","Data":"b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920"} Mar 07 08:23:25 crc kubenswrapper[4761]: I0307 08:23:25.053805 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mkbnc" podStartSLOduration=2.537630139 podStartE2EDuration="7.053786748s" podCreationTimestamp="2026-03-07 08:23:18 +0000 UTC" firstStartedPulling="2026-03-07 08:23:19.952079961 +0000 UTC m=+2056.861246456" lastFinishedPulling="2026-03-07 08:23:24.46823655 +0000 UTC m=+2061.377403065" observedRunningTime="2026-03-07 08:23:25.048563687 +0000 UTC m=+2061.957730172" watchObservedRunningTime="2026-03-07 08:23:25.053786748 +0000 UTC m=+2061.962953223" Mar 07 08:23:27 crc kubenswrapper[4761]: I0307 08:23:27.051905 4761 generic.go:334] "Generic (PLEG): container finished" podID="1ee12ec5-76cf-4824-9882-d55c16a3c08e" containerID="69b53d2d904bce2a677b915a6b2d44900d2544b881b943ea96f71ced29b52b11" exitCode=0 Mar 07 08:23:27 crc kubenswrapper[4761]: I0307 08:23:27.052049 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" event={"ID":"1ee12ec5-76cf-4824-9882-d55c16a3c08e","Type":"ContainerDied","Data":"69b53d2d904bce2a677b915a6b2d44900d2544b881b943ea96f71ced29b52b11"} Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.614562 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.642994 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj47v\" (UniqueName: \"kubernetes.io/projected/1ee12ec5-76cf-4824-9882-d55c16a3c08e-kube-api-access-sj47v\") pod \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.643096 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-inventory\") pod \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.643127 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-ssh-key-openstack-edpm-ipam\") pod \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.688064 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ee12ec5-76cf-4824-9882-d55c16a3c08e-kube-api-access-sj47v" (OuterVolumeSpecName: "kube-api-access-sj47v") pod "1ee12ec5-76cf-4824-9882-d55c16a3c08e" (UID: "1ee12ec5-76cf-4824-9882-d55c16a3c08e"). InnerVolumeSpecName "kube-api-access-sj47v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.691924 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1ee12ec5-76cf-4824-9882-d55c16a3c08e" (UID: "1ee12ec5-76cf-4824-9882-d55c16a3c08e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.699347 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-inventory" (OuterVolumeSpecName: "inventory") pod "1ee12ec5-76cf-4824-9882-d55c16a3c08e" (UID: "1ee12ec5-76cf-4824-9882-d55c16a3c08e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.746987 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.747160 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.747244 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj47v\" (UniqueName: \"kubernetes.io/projected/1ee12ec5-76cf-4824-9882-d55c16a3c08e-kube-api-access-sj47v\") on node \"crc\" DevicePath \"\"" Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.957042 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:28 crc 
kubenswrapper[4761]: I0307 08:23:28.957124 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.019346 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.078067 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.078120 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" event={"ID":"1ee12ec5-76cf-4824-9882-d55c16a3c08e","Type":"ContainerDied","Data":"1b358518e6c476a57a727cf767061051d35b33836924847906a2d4472d3f3da9"} Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.078144 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b358518e6c476a57a727cf767061051d35b33836924847906a2d4472d3f3da9" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.174091 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.180457 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h"] Mar 07 08:23:29 crc kubenswrapper[4761]: E0307 08:23:29.181170 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee12ec5-76cf-4824-9882-d55c16a3c08e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.181199 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee12ec5-76cf-4824-9882-d55c16a3c08e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 07 
08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.181577 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ee12ec5-76cf-4824-9882-d55c16a3c08e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.182599 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.187852 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.188990 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.189998 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.193480 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h"] Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.194586 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.265225 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.265451 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.266017 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqxqk\" (UniqueName: \"kubernetes.io/projected/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-kube-api-access-gqxqk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.276839 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mkbnc"] Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.368131 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.368308 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqxqk\" (UniqueName: \"kubernetes.io/projected/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-kube-api-access-gqxqk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.368428 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.372633 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.372682 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.386087 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqxqk\" (UniqueName: \"kubernetes.io/projected/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-kube-api-access-gqxqk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.506119 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:30 crc kubenswrapper[4761]: I0307 08:23:30.220165 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h"] Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.101169 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" event={"ID":"c36e1db2-a57f-46b3-9271-7ba8586fc8b2","Type":"ContainerStarted","Data":"91d9179d1900a693f7041bbfae4987b2f4ad965f2cad6153af32867bcdeaf51e"} Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.101328 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mkbnc" podUID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerName="registry-server" containerID="cri-o://b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920" gracePeriod=2 Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.526043 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.644547 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9b7h\" (UniqueName: \"kubernetes.io/projected/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-kube-api-access-c9b7h\") pod \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.644807 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-catalog-content\") pod \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.644917 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-utilities\") pod \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.645604 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-utilities" (OuterVolumeSpecName: "utilities") pod "ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" (UID: "ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.652985 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-kube-api-access-c9b7h" (OuterVolumeSpecName: "kube-api-access-c9b7h") pod "ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" (UID: "ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf"). InnerVolumeSpecName "kube-api-access-c9b7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.732423 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" (UID: "ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.749641 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.749695 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.749742 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9b7h\" (UniqueName: \"kubernetes.io/projected/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-kube-api-access-c9b7h\") on node \"crc\" DevicePath \"\"" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.121003 4761 generic.go:334] "Generic (PLEG): container finished" podID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerID="b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920" exitCode=0 Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.121061 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkbnc" event={"ID":"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf","Type":"ContainerDied","Data":"b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920"} Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.121120 4761 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.121209 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkbnc" event={"ID":"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf","Type":"ContainerDied","Data":"cbd4b766778c1b6e924d54a6d3bdf3f61405e024f607cc35923c75f35e9024cb"} Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.121256 4761 scope.go:117] "RemoveContainer" containerID="b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.128596 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" event={"ID":"c36e1db2-a57f-46b3-9271-7ba8586fc8b2","Type":"ContainerStarted","Data":"a937cdf0dc036d35b334bc793da423b2b40ab52bd5ced44329980bec5ac6dbfc"} Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.167954 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" podStartSLOduration=2.542825897 podStartE2EDuration="3.167932816s" podCreationTimestamp="2026-03-07 08:23:29 +0000 UTC" firstStartedPulling="2026-03-07 08:23:30.232068862 +0000 UTC m=+2067.141235367" lastFinishedPulling="2026-03-07 08:23:30.857175801 +0000 UTC m=+2067.766342286" observedRunningTime="2026-03-07 08:23:32.154225233 +0000 UTC m=+2069.063391718" watchObservedRunningTime="2026-03-07 08:23:32.167932816 +0000 UTC m=+2069.077099311" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.178029 4761 scope.go:117] "RemoveContainer" containerID="b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.192311 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mkbnc"] Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 
08:23:32.207889 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mkbnc"] Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.218707 4761 scope.go:117] "RemoveContainer" containerID="a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.276692 4761 scope.go:117] "RemoveContainer" containerID="b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920" Mar 07 08:23:32 crc kubenswrapper[4761]: E0307 08:23:32.277148 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920\": container with ID starting with b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920 not found: ID does not exist" containerID="b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.277196 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920"} err="failed to get container status \"b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920\": rpc error: code = NotFound desc = could not find container \"b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920\": container with ID starting with b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920 not found: ID does not exist" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.277230 4761 scope.go:117] "RemoveContainer" containerID="b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65" Mar 07 08:23:32 crc kubenswrapper[4761]: E0307 08:23:32.277590 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65\": container with ID 
starting with b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65 not found: ID does not exist" containerID="b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.277637 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65"} err="failed to get container status \"b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65\": rpc error: code = NotFound desc = could not find container \"b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65\": container with ID starting with b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65 not found: ID does not exist" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.277657 4761 scope.go:117] "RemoveContainer" containerID="a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a" Mar 07 08:23:32 crc kubenswrapper[4761]: E0307 08:23:32.278236 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a\": container with ID starting with a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a not found: ID does not exist" containerID="a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.278288 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a"} err="failed to get container status \"a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a\": rpc error: code = NotFound desc = could not find container \"a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a\": container with ID starting with a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a not found: 
ID does not exist" Mar 07 08:23:33 crc kubenswrapper[4761]: I0307 08:23:33.724187 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:23:33 crc kubenswrapper[4761]: E0307 08:23:33.725129 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:23:33 crc kubenswrapper[4761]: I0307 08:23:33.745905 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" path="/var/lib/kubelet/pods/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf/volumes" Mar 07 08:23:37 crc kubenswrapper[4761]: I0307 08:23:37.054959 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-79a2-account-create-update-dj29x"] Mar 07 08:23:37 crc kubenswrapper[4761]: I0307 08:23:37.067318 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-pw6jj"] Mar 07 08:23:37 crc kubenswrapper[4761]: I0307 08:23:37.077990 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-pw6jj"] Mar 07 08:23:37 crc kubenswrapper[4761]: I0307 08:23:37.092431 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-79a2-account-create-update-dj29x"] Mar 07 08:23:37 crc kubenswrapper[4761]: I0307 08:23:37.720636 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2142964f-61fc-4ae0-af75-f6a72e968294" path="/var/lib/kubelet/pods/2142964f-61fc-4ae0-af75-f6a72e968294/volumes" Mar 07 08:23:37 crc kubenswrapper[4761]: I0307 08:23:37.723080 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="856a8ecd-1cf0-4150-9527-c457571785bd" path="/var/lib/kubelet/pods/856a8ecd-1cf0-4150-9527-c457571785bd/volumes" Mar 07 08:23:38 crc kubenswrapper[4761]: I0307 08:23:38.060647 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-69bc-account-create-update-jxq5h"] Mar 07 08:23:38 crc kubenswrapper[4761]: I0307 08:23:38.089828 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-69bc-account-create-update-jxq5h"] Mar 07 08:23:38 crc kubenswrapper[4761]: I0307 08:23:38.105242 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-9vzc2"] Mar 07 08:23:38 crc kubenswrapper[4761]: I0307 08:23:38.116761 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-9vzc2"] Mar 07 08:23:38 crc kubenswrapper[4761]: I0307 08:23:38.130778 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-172f-account-create-update-cmtmp"] Mar 07 08:23:38 crc kubenswrapper[4761]: I0307 08:23:38.137673 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-172f-account-create-update-cmtmp"] Mar 07 08:23:38 crc kubenswrapper[4761]: I0307 08:23:38.157379 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-8dtv6"] Mar 07 08:23:38 crc kubenswrapper[4761]: I0307 08:23:38.175728 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-8dtv6"] Mar 07 08:23:39 crc kubenswrapper[4761]: I0307 08:23:39.731283 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eaf7dcd-b827-450a-8ac6-9953588f7697" path="/var/lib/kubelet/pods/2eaf7dcd-b827-450a-8ac6-9953588f7697/volumes" Mar 07 08:23:39 crc kubenswrapper[4761]: I0307 08:23:39.732932 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="803bf161-8aed-4d86-bb34-7664bfa5a21d" path="/var/lib/kubelet/pods/803bf161-8aed-4d86-bb34-7664bfa5a21d/volumes" Mar 07 08:23:39 
crc kubenswrapper[4761]: I0307 08:23:39.734235 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a467587-eec2-4610-af1d-e666203cdddb" path="/var/lib/kubelet/pods/9a467587-eec2-4610-af1d-e666203cdddb/volumes"
Mar 07 08:23:39 crc kubenswrapper[4761]: I0307 08:23:39.735540 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f77b840-931c-4b69-a2e4-23c7bf19f14e" path="/var/lib/kubelet/pods/9f77b840-931c-4b69-a2e4-23c7bf19f14e/volumes"
Mar 07 08:23:46 crc kubenswrapper[4761]: I0307 08:23:46.706287 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806"
Mar 07 08:23:47 crc kubenswrapper[4761]: I0307 08:23:47.320313 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"4205e887a96e2c7dfc1520ac45c44653f6029f5d7474aa135bc6c6eb298eb9d6"}
Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.156553 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547864-s7kqr"]
Mar 07 08:24:00 crc kubenswrapper[4761]: E0307 08:24:00.157802 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerName="registry-server"
Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.157823 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerName="registry-server"
Mar 07 08:24:00 crc kubenswrapper[4761]: E0307 08:24:00.157862 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerName="extract-utilities"
Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.157870 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerName="extract-utilities"
Mar 07 08:24:00 crc kubenswrapper[4761]: E0307 08:24:00.157889 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerName="extract-content"
Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.157898 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerName="extract-content"
Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.158219 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerName="registry-server"
Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.159413 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547864-s7kqr"
Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.162398 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.162901 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv"
Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.172202 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.198518 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547864-s7kqr"]
Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.301595 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8zmz\" (UniqueName: \"kubernetes.io/projected/2269f929-4b06-4694-8123-6741b2adfa58-kube-api-access-z8zmz\") pod \"auto-csr-approver-29547864-s7kqr\" (UID: \"2269f929-4b06-4694-8123-6741b2adfa58\") " pod="openshift-infra/auto-csr-approver-29547864-s7kqr"
Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.404405 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8zmz\" (UniqueName: \"kubernetes.io/projected/2269f929-4b06-4694-8123-6741b2adfa58-kube-api-access-z8zmz\") pod \"auto-csr-approver-29547864-s7kqr\" (UID: \"2269f929-4b06-4694-8123-6741b2adfa58\") " pod="openshift-infra/auto-csr-approver-29547864-s7kqr"
Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.422945 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8zmz\" (UniqueName: \"kubernetes.io/projected/2269f929-4b06-4694-8123-6741b2adfa58-kube-api-access-z8zmz\") pod \"auto-csr-approver-29547864-s7kqr\" (UID: \"2269f929-4b06-4694-8123-6741b2adfa58\") " pod="openshift-infra/auto-csr-approver-29547864-s7kqr"
Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.486944 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547864-s7kqr"
Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.960977 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547864-s7kqr"]
Mar 07 08:24:01 crc kubenswrapper[4761]: I0307 08:24:01.489993 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547864-s7kqr" event={"ID":"2269f929-4b06-4694-8123-6741b2adfa58","Type":"ContainerStarted","Data":"4541c9326c08a2494bb89df7a569daf1283d5b70952b71a2aba9542c2062e161"}
Mar 07 08:24:02 crc kubenswrapper[4761]: I0307 08:24:02.504982 4761 generic.go:334] "Generic (PLEG): container finished" podID="2269f929-4b06-4694-8123-6741b2adfa58" containerID="090a7e140ac0a1c9c2a8e95ff23a018d80262e37d5a50b31d6f03c5d5e1dc22c" exitCode=0
Mar 07 08:24:02 crc kubenswrapper[4761]: I0307 08:24:02.505172 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547864-s7kqr" event={"ID":"2269f929-4b06-4694-8123-6741b2adfa58","Type":"ContainerDied","Data":"090a7e140ac0a1c9c2a8e95ff23a018d80262e37d5a50b31d6f03c5d5e1dc22c"}
Mar 07 08:24:03 crc kubenswrapper[4761]: E0307 08:24:03.698174 4761 info.go:109] Failed to get network devices: open /sys/class/net/4541c9326c08a24/address: no such file or directory
Mar 07 08:24:03 crc kubenswrapper[4761]: I0307 08:24:03.947545 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547864-s7kqr"
Mar 07 08:24:04 crc kubenswrapper[4761]: I0307 08:24:04.104384 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8zmz\" (UniqueName: \"kubernetes.io/projected/2269f929-4b06-4694-8123-6741b2adfa58-kube-api-access-z8zmz\") pod \"2269f929-4b06-4694-8123-6741b2adfa58\" (UID: \"2269f929-4b06-4694-8123-6741b2adfa58\") "
Mar 07 08:24:04 crc kubenswrapper[4761]: I0307 08:24:04.115364 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2269f929-4b06-4694-8123-6741b2adfa58-kube-api-access-z8zmz" (OuterVolumeSpecName: "kube-api-access-z8zmz") pod "2269f929-4b06-4694-8123-6741b2adfa58" (UID: "2269f929-4b06-4694-8123-6741b2adfa58"). InnerVolumeSpecName "kube-api-access-z8zmz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:24:04 crc kubenswrapper[4761]: I0307 08:24:04.207138 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8zmz\" (UniqueName: \"kubernetes.io/projected/2269f929-4b06-4694-8123-6741b2adfa58-kube-api-access-z8zmz\") on node \"crc\" DevicePath \"\""
Mar 07 08:24:04 crc kubenswrapper[4761]: I0307 08:24:04.531201 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547864-s7kqr" event={"ID":"2269f929-4b06-4694-8123-6741b2adfa58","Type":"ContainerDied","Data":"4541c9326c08a2494bb89df7a569daf1283d5b70952b71a2aba9542c2062e161"}
Mar 07 08:24:04 crc kubenswrapper[4761]: I0307 08:24:04.531435 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4541c9326c08a2494bb89df7a569daf1283d5b70952b71a2aba9542c2062e161"
Mar 07 08:24:04 crc kubenswrapper[4761]: I0307 08:24:04.531320 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547864-s7kqr"
Mar 07 08:24:05 crc kubenswrapper[4761]: I0307 08:24:05.035167 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547858-dj8v9"]
Mar 07 08:24:05 crc kubenswrapper[4761]: I0307 08:24:05.044997 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547858-dj8v9"]
Mar 07 08:24:05 crc kubenswrapper[4761]: I0307 08:24:05.719399 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8ac045a-b834-4663-9efa-3b594a7f206f" path="/var/lib/kubelet/pods/b8ac045a-b834-4663-9efa-3b594a7f206f/volumes"
Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.471788 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5bjj4"]
Mar 07 08:24:13 crc kubenswrapper[4761]: E0307 08:24:13.475477 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2269f929-4b06-4694-8123-6741b2adfa58" containerName="oc"
Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.475504 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2269f929-4b06-4694-8123-6741b2adfa58" containerName="oc"
Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.475795 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="2269f929-4b06-4694-8123-6741b2adfa58" containerName="oc"
Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.477501 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5bjj4"
Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.499118 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5bjj4"]
Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.554410 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzwkl\" (UniqueName: \"kubernetes.io/projected/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-kube-api-access-kzwkl\") pod \"community-operators-5bjj4\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " pod="openshift-marketplace/community-operators-5bjj4"
Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.554482 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-catalog-content\") pod \"community-operators-5bjj4\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " pod="openshift-marketplace/community-operators-5bjj4"
Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.554731 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-utilities\") pod \"community-operators-5bjj4\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " pod="openshift-marketplace/community-operators-5bjj4"
Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.657026 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzwkl\" (UniqueName: \"kubernetes.io/projected/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-kube-api-access-kzwkl\") pod \"community-operators-5bjj4\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " pod="openshift-marketplace/community-operators-5bjj4"
Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.657099 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-catalog-content\") pod \"community-operators-5bjj4\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " pod="openshift-marketplace/community-operators-5bjj4"
Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.657284 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-utilities\") pod \"community-operators-5bjj4\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " pod="openshift-marketplace/community-operators-5bjj4"
Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.657698 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-catalog-content\") pod \"community-operators-5bjj4\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " pod="openshift-marketplace/community-operators-5bjj4"
Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.657710 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-utilities\") pod \"community-operators-5bjj4\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " pod="openshift-marketplace/community-operators-5bjj4"
Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.676751 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzwkl\" (UniqueName: \"kubernetes.io/projected/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-kube-api-access-kzwkl\") pod \"community-operators-5bjj4\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " pod="openshift-marketplace/community-operators-5bjj4"
Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.806993 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5bjj4"
Mar 07 08:24:14 crc kubenswrapper[4761]: I0307 08:24:14.361195 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5bjj4"]
Mar 07 08:24:14 crc kubenswrapper[4761]: I0307 08:24:14.661081 4761 generic.go:334] "Generic (PLEG): container finished" podID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerID="6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a" exitCode=0
Mar 07 08:24:14 crc kubenswrapper[4761]: I0307 08:24:14.661187 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjj4" event={"ID":"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b","Type":"ContainerDied","Data":"6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a"}
Mar 07 08:24:14 crc kubenswrapper[4761]: I0307 08:24:14.661452 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjj4" event={"ID":"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b","Type":"ContainerStarted","Data":"9056333e59253f7f82b14a412022ec7f901c8f8d49a9d4aee3d227a612a81e46"}
Mar 07 08:24:15 crc kubenswrapper[4761]: I0307 08:24:15.684661 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjj4" event={"ID":"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b","Type":"ContainerStarted","Data":"de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5"}
Mar 07 08:24:17 crc kubenswrapper[4761]: I0307 08:24:17.592771 4761 scope.go:117] "RemoveContainer" containerID="4876e046a7b6a600c7be9a7f1e443d545d71c97d3c11f39a16f16d32c1322116"
Mar 07 08:24:17 crc kubenswrapper[4761]: I0307 08:24:17.658431 4761 scope.go:117] "RemoveContainer" containerID="4278c6d7e37afe8132d9584f5a1a8ff6192cc21ad46705e83ef3316d86918aff"
Mar 07 08:24:17 crc kubenswrapper[4761]: I0307 08:24:17.709423 4761 generic.go:334] "Generic (PLEG): container finished" podID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerID="de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5" exitCode=0
Mar 07 08:24:17 crc kubenswrapper[4761]: I0307 08:24:17.721222 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjj4" event={"ID":"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b","Type":"ContainerDied","Data":"de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5"}
Mar 07 08:24:17 crc kubenswrapper[4761]: I0307 08:24:17.756307 4761 scope.go:117] "RemoveContainer" containerID="8bd1714162f5fffdc0f00791d72262d374eef35faf0b19a884566f7b4045c8a0"
Mar 07 08:24:17 crc kubenswrapper[4761]: I0307 08:24:17.794112 4761 scope.go:117] "RemoveContainer" containerID="7e5c076375addd1c3b05b3e3c6c2449ad7b80520631cb308c2a677abe8bce2d0"
Mar 07 08:24:17 crc kubenswrapper[4761]: I0307 08:24:17.852023 4761 scope.go:117] "RemoveContainer" containerID="f5c225d3c383fc2428ebdbaef59f7c19afff3acb77d8d8c8541b440f91e5c607"
Mar 07 08:24:17 crc kubenswrapper[4761]: I0307 08:24:17.911292 4761 scope.go:117] "RemoveContainer" containerID="149b48cf85012d70b4ae66bce7176663f91468b88970a035d6273065ef6b64fd"
Mar 07 08:24:17 crc kubenswrapper[4761]: I0307 08:24:17.957436 4761 scope.go:117] "RemoveContainer" containerID="d833b981b4691270dca8f538b2b902fc383572783c4ddf6451d1d99578a88b14"
Mar 07 08:24:18 crc kubenswrapper[4761]: I0307 08:24:18.723474 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjj4" event={"ID":"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b","Type":"ContainerStarted","Data":"365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42"}
Mar 07 08:24:18 crc kubenswrapper[4761]: I0307 08:24:18.754842 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5bjj4" podStartSLOduration=2.23470447 podStartE2EDuration="5.754819978s" podCreationTimestamp="2026-03-07 08:24:13 +0000 UTC" firstStartedPulling="2026-03-07 08:24:14.663300666 +0000 UTC m=+2111.572467151" lastFinishedPulling="2026-03-07 08:24:18.183416184 +0000 UTC m=+2115.092582659" observedRunningTime="2026-03-07 08:24:18.745201697 +0000 UTC m=+2115.654368172" watchObservedRunningTime="2026-03-07 08:24:18.754819978 +0000 UTC m=+2115.663986443"
Mar 07 08:24:19 crc kubenswrapper[4761]: I0307 08:24:19.058910 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7wm25"]
Mar 07 08:24:19 crc kubenswrapper[4761]: I0307 08:24:19.073352 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7wm25"]
Mar 07 08:24:19 crc kubenswrapper[4761]: I0307 08:24:19.719191 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2137fb0-1942-4a4d-9ac1-13e43c72ee4a" path="/var/lib/kubelet/pods/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a/volumes"
Mar 07 08:24:23 crc kubenswrapper[4761]: I0307 08:24:23.807620 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5bjj4"
Mar 07 08:24:23 crc kubenswrapper[4761]: I0307 08:24:23.808339 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5bjj4"
Mar 07 08:24:23 crc kubenswrapper[4761]: I0307 08:24:23.871273 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5bjj4"
Mar 07 08:24:24 crc kubenswrapper[4761]: I0307 08:24:24.900409 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5bjj4"
Mar 07 08:24:24 crc kubenswrapper[4761]: I0307 08:24:24.988863 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5bjj4"]
Mar 07 08:24:26 crc kubenswrapper[4761]: I0307 08:24:26.841130 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5bjj4" podUID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerName="registry-server" containerID="cri-o://365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42" gracePeriod=2
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.441979 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5bjj4"
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.541656 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-catalog-content\") pod \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") "
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.541762 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-utilities\") pod \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") "
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.541905 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzwkl\" (UniqueName: \"kubernetes.io/projected/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-kube-api-access-kzwkl\") pod \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") "
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.542872 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-utilities" (OuterVolumeSpecName: "utilities") pod "76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" (UID: "76644d6a-a16c-42b0-8cb7-0f75a62a0d7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.547553 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-kube-api-access-kzwkl" (OuterVolumeSpecName: "kube-api-access-kzwkl") pod "76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" (UID: "76644d6a-a16c-42b0-8cb7-0f75a62a0d7b"). InnerVolumeSpecName "kube-api-access-kzwkl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.607096 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" (UID: "76644d6a-a16c-42b0-8cb7-0f75a62a0d7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.645516 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.645563 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.645577 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzwkl\" (UniqueName: \"kubernetes.io/projected/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-kube-api-access-kzwkl\") on node \"crc\" DevicePath \"\""
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.852590 4761 generic.go:334] "Generic (PLEG): container finished" podID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerID="365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42" exitCode=0
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.852632 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjj4" event={"ID":"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b","Type":"ContainerDied","Data":"365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42"}
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.852657 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjj4" event={"ID":"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b","Type":"ContainerDied","Data":"9056333e59253f7f82b14a412022ec7f901c8f8d49a9d4aee3d227a612a81e46"}
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.852679 4761 scope.go:117] "RemoveContainer" containerID="365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42"
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.852881 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5bjj4"
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.877528 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5bjj4"]
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.886626 4761 scope.go:117] "RemoveContainer" containerID="de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5"
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.888921 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5bjj4"]
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.918467 4761 scope.go:117] "RemoveContainer" containerID="6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a"
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.968130 4761 scope.go:117] "RemoveContainer" containerID="365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42"
Mar 07 08:24:27 crc kubenswrapper[4761]: E0307 08:24:27.968475 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42\": container with ID starting with 365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42 not found: ID does not exist" containerID="365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42"
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.968517 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42"} err="failed to get container status \"365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42\": rpc error: code = NotFound desc = could not find container \"365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42\": container with ID starting with 365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42 not found: ID does not exist"
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.968541 4761 scope.go:117] "RemoveContainer" containerID="de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5"
Mar 07 08:24:27 crc kubenswrapper[4761]: E0307 08:24:27.968816 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5\": container with ID starting with de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5 not found: ID does not exist" containerID="de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5"
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.968842 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5"} err="failed to get container status \"de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5\": rpc error: code = NotFound desc = could not find container \"de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5\": container with ID starting with de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5 not found: ID does not exist"
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.968859 4761 scope.go:117] "RemoveContainer" containerID="6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a"
Mar 07 08:24:27 crc kubenswrapper[4761]: E0307 08:24:27.969186 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a\": container with ID starting with 6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a not found: ID does not exist" containerID="6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a"
Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.969211 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a"} err="failed to get container status \"6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a\": rpc error: code = NotFound desc = could not find container \"6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a\": container with ID starting with 6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a not found: ID does not exist"
Mar 07 08:24:29 crc kubenswrapper[4761]: I0307 08:24:29.731817 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" path="/var/lib/kubelet/pods/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b/volumes"
Mar 07 08:24:40 crc kubenswrapper[4761]: I0307 08:24:39.999645 4761 generic.go:334] "Generic (PLEG): container finished" podID="c36e1db2-a57f-46b3-9271-7ba8586fc8b2" containerID="a937cdf0dc036d35b334bc793da423b2b40ab52bd5ced44329980bec5ac6dbfc" exitCode=0
Mar 07 08:24:40 crc kubenswrapper[4761]: I0307 08:24:39.999810 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" event={"ID":"c36e1db2-a57f-46b3-9271-7ba8586fc8b2","Type":"ContainerDied","Data":"a937cdf0dc036d35b334bc793da423b2b40ab52bd5ced44329980bec5ac6dbfc"}
Mar 07 08:24:41 crc kubenswrapper[4761]: I0307 08:24:41.917789 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.003701 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqxqk\" (UniqueName: \"kubernetes.io/projected/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-kube-api-access-gqxqk\") pod \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") "
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.003848 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-inventory\") pod \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") "
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.003945 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-ssh-key-openstack-edpm-ipam\") pod \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") "
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.013845 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-kube-api-access-gqxqk" (OuterVolumeSpecName: "kube-api-access-gqxqk") pod "c36e1db2-a57f-46b3-9271-7ba8586fc8b2" (UID: "c36e1db2-a57f-46b3-9271-7ba8586fc8b2"). InnerVolumeSpecName "kube-api-access-gqxqk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.042799 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-inventory" (OuterVolumeSpecName: "inventory") pod "c36e1db2-a57f-46b3-9271-7ba8586fc8b2" (UID: "c36e1db2-a57f-46b3-9271-7ba8586fc8b2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.045524 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-hg9sm"]
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.049262 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c36e1db2-a57f-46b3-9271-7ba8586fc8b2" (UID: "c36e1db2-a57f-46b3-9271-7ba8586fc8b2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.056232 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-hg9sm"]
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.106358 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqxqk\" (UniqueName: \"kubernetes.io/projected/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-kube-api-access-gqxqk\") on node \"crc\" DevicePath \"\""
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.106405 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-inventory\") on node \"crc\" DevicePath \"\""
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.106418 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.139649 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4"]
Mar 07 08:24:42 crc kubenswrapper[4761]: E0307 08:24:42.140263 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c36e1db2-a57f-46b3-9271-7ba8586fc8b2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.140288 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c36e1db2-a57f-46b3-9271-7ba8586fc8b2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Mar 07 08:24:42 crc kubenswrapper[4761]: E0307 08:24:42.140327 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerName="extract-utilities"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.140337 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerName="extract-utilities"
Mar 07 08:24:42 crc kubenswrapper[4761]: E0307 08:24:42.140360 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerName="registry-server"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.140369 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerName="registry-server"
Mar 07 08:24:42 crc kubenswrapper[4761]: E0307 08:24:42.140390 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerName="extract-content"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.140398 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerName="extract-content"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.140645 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerName="registry-server"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.140690 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c36e1db2-a57f-46b3-9271-7ba8586fc8b2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.142911 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.154752 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4"]
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.208325 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg65r\" (UniqueName: \"kubernetes.io/projected/69faf2be-decb-4f75-be02-7f0d23bea59a-kube-api-access-zg65r\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q92t4\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.208404 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q92t4\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.208686 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q92t4\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.310803 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q92t4\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.311176 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg65r\" (UniqueName: \"kubernetes.io/projected/69faf2be-decb-4f75-be02-7f0d23bea59a-kube-api-access-zg65r\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q92t4\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.311231 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q92t4\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.316324 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q92t4\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.316380 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q92t4\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.328648 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg65r\" (UniqueName: \"kubernetes.io/projected/69faf2be-decb-4f75-be02-7f0d23bea59a-kube-api-access-zg65r\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q92t4\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.411864 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" event={"ID":"c36e1db2-a57f-46b3-9271-7ba8586fc8b2","Type":"ContainerDied","Data":"91d9179d1900a693f7041bbfae4987b2f4ad965f2cad6153af32867bcdeaf51e"}
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.411912 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91d9179d1900a693f7041bbfae4987b2f4ad965f2cad6153af32867bcdeaf51e"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.411901 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h"
Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.461738 4761 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" Mar 07 08:24:43 crc kubenswrapper[4761]: I0307 08:24:43.106822 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4"] Mar 07 08:24:43 crc kubenswrapper[4761]: I0307 08:24:43.429393 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" event={"ID":"69faf2be-decb-4f75-be02-7f0d23bea59a","Type":"ContainerStarted","Data":"659ec4842099d30096f35f9f7a4853d689141e0fd949e80abe217f54da28beaf"} Mar 07 08:24:43 crc kubenswrapper[4761]: I0307 08:24:43.726710 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dac6b04-d81b-43a0-8b71-ebaa8842366d" path="/var/lib/kubelet/pods/2dac6b04-d81b-43a0-8b71-ebaa8842366d/volumes" Mar 07 08:24:44 crc kubenswrapper[4761]: I0307 08:24:44.438462 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" event={"ID":"69faf2be-decb-4f75-be02-7f0d23bea59a","Type":"ContainerStarted","Data":"b05b71baca6d65772deeea5d4a340281d65ba093d96a03554b6cf6259c2c187b"} Mar 07 08:24:44 crc kubenswrapper[4761]: I0307 08:24:44.456596 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" podStartSLOduration=2.021783739 podStartE2EDuration="2.456578331s" podCreationTimestamp="2026-03-07 08:24:42 +0000 UTC" firstStartedPulling="2026-03-07 08:24:43.129750474 +0000 UTC m=+2140.038916949" lastFinishedPulling="2026-03-07 08:24:43.564545066 +0000 UTC m=+2140.473711541" observedRunningTime="2026-03-07 08:24:44.452968021 +0000 UTC m=+2141.362134506" watchObservedRunningTime="2026-03-07 08:24:44.456578331 +0000 UTC m=+2141.365744806" Mar 07 08:24:45 crc kubenswrapper[4761]: I0307 08:24:45.044016 4761 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/aodh-77e3-account-create-update-8b9pf"] Mar 07 08:24:45 crc kubenswrapper[4761]: I0307 08:24:45.059160 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-ddvxb"] Mar 07 08:24:45 crc kubenswrapper[4761]: I0307 08:24:45.069308 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-77e3-account-create-update-8b9pf"] Mar 07 08:24:45 crc kubenswrapper[4761]: I0307 08:24:45.079849 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-ddvxb"] Mar 07 08:24:45 crc kubenswrapper[4761]: I0307 08:24:45.722826 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="130238c4-fadf-46e2-a802-0608b83ec9a2" path="/var/lib/kubelet/pods/130238c4-fadf-46e2-a802-0608b83ec9a2/volumes" Mar 07 08:24:45 crc kubenswrapper[4761]: I0307 08:24:45.727370 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f7b5d35-c686-46fe-9e07-8f95cba61e5b" path="/var/lib/kubelet/pods/2f7b5d35-c686-46fe-9e07-8f95cba61e5b/volumes" Mar 07 08:24:49 crc kubenswrapper[4761]: I0307 08:24:49.499059 4761 generic.go:334] "Generic (PLEG): container finished" podID="69faf2be-decb-4f75-be02-7f0d23bea59a" containerID="b05b71baca6d65772deeea5d4a340281d65ba093d96a03554b6cf6259c2c187b" exitCode=0 Mar 07 08:24:49 crc kubenswrapper[4761]: I0307 08:24:49.499136 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" event={"ID":"69faf2be-decb-4f75-be02-7f0d23bea59a","Type":"ContainerDied","Data":"b05b71baca6d65772deeea5d4a340281d65ba093d96a03554b6cf6259c2c187b"} Mar 07 08:24:50 crc kubenswrapper[4761]: I0307 08:24:50.039174 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hwxg9"] Mar 07 08:24:50 crc kubenswrapper[4761]: I0307 08:24:50.055433 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hwxg9"] Mar 07 
08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.059337 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.171225 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg65r\" (UniqueName: \"kubernetes.io/projected/69faf2be-decb-4f75-be02-7f0d23bea59a-kube-api-access-zg65r\") pod \"69faf2be-decb-4f75-be02-7f0d23bea59a\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.171310 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-ssh-key-openstack-edpm-ipam\") pod \"69faf2be-decb-4f75-be02-7f0d23bea59a\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.171581 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-inventory\") pod \"69faf2be-decb-4f75-be02-7f0d23bea59a\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.176924 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69faf2be-decb-4f75-be02-7f0d23bea59a-kube-api-access-zg65r" (OuterVolumeSpecName: "kube-api-access-zg65r") pod "69faf2be-decb-4f75-be02-7f0d23bea59a" (UID: "69faf2be-decb-4f75-be02-7f0d23bea59a"). InnerVolumeSpecName "kube-api-access-zg65r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.207603 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-inventory" (OuterVolumeSpecName: "inventory") pod "69faf2be-decb-4f75-be02-7f0d23bea59a" (UID: "69faf2be-decb-4f75-be02-7f0d23bea59a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.226436 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "69faf2be-decb-4f75-be02-7f0d23bea59a" (UID: "69faf2be-decb-4f75-be02-7f0d23bea59a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.274640 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.274681 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg65r\" (UniqueName: \"kubernetes.io/projected/69faf2be-decb-4f75-be02-7f0d23bea59a-kube-api-access-zg65r\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.274698 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.523223 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" 
event={"ID":"69faf2be-decb-4f75-be02-7f0d23bea59a","Type":"ContainerDied","Data":"659ec4842099d30096f35f9f7a4853d689141e0fd949e80abe217f54da28beaf"} Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.523504 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="659ec4842099d30096f35f9f7a4853d689141e0fd949e80abe217f54da28beaf" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.523286 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.637087 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g"] Mar 07 08:24:51 crc kubenswrapper[4761]: E0307 08:24:51.637959 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69faf2be-decb-4f75-be02-7f0d23bea59a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.638099 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="69faf2be-decb-4f75-be02-7f0d23bea59a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.638515 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="69faf2be-decb-4f75-be02-7f0d23bea59a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.639679 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.642126 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.642422 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.642586 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.642863 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.651910 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g"] Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.722797 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4931aa42-2c29-4ec8-ba24-e90210ad1aca" path="/var/lib/kubelet/pods/4931aa42-2c29-4ec8-ba24-e90210ad1aca/volumes" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.786694 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t7m5g\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.786877 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbbhc\" (UniqueName: 
\"kubernetes.io/projected/0e1e8856-bbd9-4931-af28-f508ce15b034-kube-api-access-jbbhc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t7m5g\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.787066 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t7m5g\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.889420 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t7m5g\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.889518 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbbhc\" (UniqueName: \"kubernetes.io/projected/0e1e8856-bbd9-4931-af28-f508ce15b034-kube-api-access-jbbhc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t7m5g\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.889630 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t7m5g\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.895250 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t7m5g\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.896145 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t7m5g\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.912897 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbbhc\" (UniqueName: \"kubernetes.io/projected/0e1e8856-bbd9-4931-af28-f508ce15b034-kube-api-access-jbbhc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t7m5g\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.962860 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:52 crc kubenswrapper[4761]: I0307 08:24:52.524219 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g"] Mar 07 08:24:53 crc kubenswrapper[4761]: I0307 08:24:53.547537 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" event={"ID":"0e1e8856-bbd9-4931-af28-f508ce15b034","Type":"ContainerStarted","Data":"75df134065de33e631664fbb28c486be6cdd6127864839bb0554d8d6e6293a37"} Mar 07 08:24:53 crc kubenswrapper[4761]: I0307 08:24:53.547927 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" event={"ID":"0e1e8856-bbd9-4931-af28-f508ce15b034","Type":"ContainerStarted","Data":"3bc722fa904a377f888d8ac19616fd2cbdb092ca879abe4894a90b035bea1061"} Mar 07 08:24:53 crc kubenswrapper[4761]: I0307 08:24:53.570347 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" podStartSLOduration=2.10574175 podStartE2EDuration="2.570328818s" podCreationTimestamp="2026-03-07 08:24:51 +0000 UTC" firstStartedPulling="2026-03-07 08:24:52.53327611 +0000 UTC m=+2149.442442585" lastFinishedPulling="2026-03-07 08:24:52.997863168 +0000 UTC m=+2149.907029653" observedRunningTime="2026-03-07 08:24:53.562358089 +0000 UTC m=+2150.471524564" watchObservedRunningTime="2026-03-07 08:24:53.570328818 +0000 UTC m=+2150.479495293" Mar 07 08:25:18 crc kubenswrapper[4761]: I0307 08:25:18.152520 4761 scope.go:117] "RemoveContainer" containerID="42e5660165444ca6df91dbb38ff4e23b3096c7787fc5e04b8ca5bb536be08a99" Mar 07 08:25:18 crc kubenswrapper[4761]: I0307 08:25:18.190371 4761 scope.go:117] "RemoveContainer" containerID="00517bee769197b1cd470a476b898df7ad9f81d3ab127b1e7dddf7ed79e2908b" Mar 07 08:25:18 crc 
kubenswrapper[4761]: I0307 08:25:18.289709 4761 scope.go:117] "RemoveContainer" containerID="0110377348c876298de2a975b96c3aa38816fbe347c2c173440876eba190ce3d" Mar 07 08:25:18 crc kubenswrapper[4761]: I0307 08:25:18.331235 4761 scope.go:117] "RemoveContainer" containerID="9c1e1a06fc0e08cdc250961ed8e0100243d00ee6fe2c789c7e55aa8258d1d22e" Mar 07 08:25:18 crc kubenswrapper[4761]: I0307 08:25:18.387310 4761 scope.go:117] "RemoveContainer" containerID="d5cb7aba8024010ea4f617e523acc80542873eaac8bf9f18735f631a8f629246" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.058247 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-rrf49"] Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.067940 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-rrf49"] Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.113201 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rvtpp"] Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.116348 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.128443 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvtpp"] Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.158294 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-catalog-content\") pod \"redhat-operators-rvtpp\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.158356 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bksts\" (UniqueName: \"kubernetes.io/projected/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-kube-api-access-bksts\") pod \"redhat-operators-rvtpp\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.158398 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-utilities\") pod \"redhat-operators-rvtpp\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.260475 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-catalog-content\") pod \"redhat-operators-rvtpp\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.260569 4761 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-bksts\" (UniqueName: \"kubernetes.io/projected/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-kube-api-access-bksts\") pod \"redhat-operators-rvtpp\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.260623 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-utilities\") pod \"redhat-operators-rvtpp\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.261011 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-catalog-content\") pod \"redhat-operators-rvtpp\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.261274 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-utilities\") pod \"redhat-operators-rvtpp\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.287281 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bksts\" (UniqueName: \"kubernetes.io/projected/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-kube-api-access-bksts\") pod \"redhat-operators-rvtpp\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.445464 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.725428 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a46a5d-0880-4af9-a48f-3599f8b1dea7" path="/var/lib/kubelet/pods/a7a46a5d-0880-4af9-a48f-3599f8b1dea7/volumes" Mar 07 08:25:30 crc kubenswrapper[4761]: I0307 08:25:29.999868 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvtpp"] Mar 07 08:25:31 crc kubenswrapper[4761]: I0307 08:25:31.031535 4761 generic.go:334] "Generic (PLEG): container finished" podID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerID="5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d" exitCode=0 Mar 07 08:25:31 crc kubenswrapper[4761]: I0307 08:25:31.031626 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvtpp" event={"ID":"0a455673-cdb5-44f0-ac3b-0b23918ef4f6","Type":"ContainerDied","Data":"5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d"} Mar 07 08:25:31 crc kubenswrapper[4761]: I0307 08:25:31.031989 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvtpp" event={"ID":"0a455673-cdb5-44f0-ac3b-0b23918ef4f6","Type":"ContainerStarted","Data":"272c533ffcd1480e21fa1b176a94fa62f045c932130f794e5a1fdeb5af6778cf"} Mar 07 08:25:32 crc kubenswrapper[4761]: I0307 08:25:32.053603 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvtpp" event={"ID":"0a455673-cdb5-44f0-ac3b-0b23918ef4f6","Type":"ContainerStarted","Data":"2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165"} Mar 07 08:25:33 crc kubenswrapper[4761]: I0307 08:25:33.067308 4761 generic.go:334] "Generic (PLEG): container finished" podID="0e1e8856-bbd9-4931-af28-f508ce15b034" containerID="75df134065de33e631664fbb28c486be6cdd6127864839bb0554d8d6e6293a37" exitCode=0 Mar 07 08:25:33 crc 
kubenswrapper[4761]: I0307 08:25:33.067409 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" event={"ID":"0e1e8856-bbd9-4931-af28-f508ce15b034","Type":"ContainerDied","Data":"75df134065de33e631664fbb28c486be6cdd6127864839bb0554d8d6e6293a37"} Mar 07 08:25:34 crc kubenswrapper[4761]: I0307 08:25:34.779358 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:25:34 crc kubenswrapper[4761]: I0307 08:25:34.915382 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbbhc\" (UniqueName: \"kubernetes.io/projected/0e1e8856-bbd9-4931-af28-f508ce15b034-kube-api-access-jbbhc\") pod \"0e1e8856-bbd9-4931-af28-f508ce15b034\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " Mar 07 08:25:34 crc kubenswrapper[4761]: I0307 08:25:34.915463 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-inventory\") pod \"0e1e8856-bbd9-4931-af28-f508ce15b034\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " Mar 07 08:25:34 crc kubenswrapper[4761]: I0307 08:25:34.916428 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-ssh-key-openstack-edpm-ipam\") pod \"0e1e8856-bbd9-4931-af28-f508ce15b034\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " Mar 07 08:25:34 crc kubenswrapper[4761]: I0307 08:25:34.920818 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e1e8856-bbd9-4931-af28-f508ce15b034-kube-api-access-jbbhc" (OuterVolumeSpecName: "kube-api-access-jbbhc") pod "0e1e8856-bbd9-4931-af28-f508ce15b034" (UID: "0e1e8856-bbd9-4931-af28-f508ce15b034"). 
InnerVolumeSpecName "kube-api-access-jbbhc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:25:34 crc kubenswrapper[4761]: I0307 08:25:34.945382 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0e1e8856-bbd9-4931-af28-f508ce15b034" (UID: "0e1e8856-bbd9-4931-af28-f508ce15b034"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:25:34 crc kubenswrapper[4761]: I0307 08:25:34.948803 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-inventory" (OuterVolumeSpecName: "inventory") pod "0e1e8856-bbd9-4931-af28-f508ce15b034" (UID: "0e1e8856-bbd9-4931-af28-f508ce15b034"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.022950 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbbhc\" (UniqueName: \"kubernetes.io/projected/0e1e8856-bbd9-4931-af28-f508ce15b034-kube-api-access-jbbhc\") on node \"crc\" DevicePath \"\""
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.023007 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-inventory\") on node \"crc\" DevicePath \"\""
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.023028 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.100006 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" event={"ID":"0e1e8856-bbd9-4931-af28-f508ce15b034","Type":"ContainerDied","Data":"3bc722fa904a377f888d8ac19616fd2cbdb092ca879abe4894a90b035bea1061"}
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.100247 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bc722fa904a377f888d8ac19616fd2cbdb092ca879abe4894a90b035bea1061"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.100083 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.189092 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"]
Mar 07 08:25:35 crc kubenswrapper[4761]: E0307 08:25:35.189602 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1e8856-bbd9-4931-af28-f508ce15b034" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.189620 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1e8856-bbd9-4931-af28-f508ce15b034" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.189902 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1e8856-bbd9-4931-af28-f508ce15b034" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.190792 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.192774 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.193014 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.193400 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.193632 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.209706 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"]
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.336435 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.336631 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.336949 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlvvf\" (UniqueName: \"kubernetes.io/projected/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-kube-api-access-hlvvf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.438870 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.439032 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlvvf\" (UniqueName: \"kubernetes.io/projected/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-kube-api-access-hlvvf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.439180 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.443564 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.450059 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.456473 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlvvf\" (UniqueName: \"kubernetes.io/projected/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-kube-api-access-hlvvf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"
Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.511390 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"
Mar 07 08:25:36 crc kubenswrapper[4761]: I0307 08:25:36.104005 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"]
Mar 07 08:25:37 crc kubenswrapper[4761]: I0307 08:25:37.128225 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" event={"ID":"0e72d6d8-c8fb-4093-9395-c3de682b7aa9","Type":"ContainerStarted","Data":"68077264f55a20414b119f09c96eb4f8b0a42eecf2388e2349c247d9d3c08a0f"}
Mar 07 08:25:37 crc kubenswrapper[4761]: I0307 08:25:37.128966 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" event={"ID":"0e72d6d8-c8fb-4093-9395-c3de682b7aa9","Type":"ContainerStarted","Data":"644302c61a4f5d4dfd273d575bf9c9cb42f1c0016c2e6fe51679e7f2e76defcc"}
Mar 07 08:25:37 crc kubenswrapper[4761]: I0307 08:25:37.151982 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" podStartSLOduration=1.5456665630000002 podStartE2EDuration="2.15196461s" podCreationTimestamp="2026-03-07 08:25:35 +0000 UTC" firstStartedPulling="2026-03-07 08:25:36.113465066 +0000 UTC m=+2193.022631541" lastFinishedPulling="2026-03-07 08:25:36.719763083 +0000 UTC m=+2193.628929588" observedRunningTime="2026-03-07 08:25:37.147919049 +0000 UTC m=+2194.057085534" watchObservedRunningTime="2026-03-07 08:25:37.15196461 +0000 UTC m=+2194.061131085"
Mar 07 08:25:39 crc kubenswrapper[4761]: I0307 08:25:39.190518 4761 generic.go:334] "Generic (PLEG): container finished" podID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerID="2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165" exitCode=0
Mar 07 08:25:39 crc kubenswrapper[4761]: I0307 08:25:39.190615 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvtpp" event={"ID":"0a455673-cdb5-44f0-ac3b-0b23918ef4f6","Type":"ContainerDied","Data":"2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165"}
Mar 07 08:25:40 crc kubenswrapper[4761]: I0307 08:25:40.203016 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvtpp" event={"ID":"0a455673-cdb5-44f0-ac3b-0b23918ef4f6","Type":"ContainerStarted","Data":"7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2"}
Mar 07 08:25:40 crc kubenswrapper[4761]: I0307 08:25:40.230281 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rvtpp" podStartSLOduration=2.6636809 podStartE2EDuration="11.23025367s" podCreationTimestamp="2026-03-07 08:25:29 +0000 UTC" firstStartedPulling="2026-03-07 08:25:31.035388332 +0000 UTC m=+2187.944554847" lastFinishedPulling="2026-03-07 08:25:39.601961132 +0000 UTC m=+2196.511127617" observedRunningTime="2026-03-07 08:25:40.224993809 +0000 UTC m=+2197.134160294" watchObservedRunningTime="2026-03-07 08:25:40.23025367 +0000 UTC m=+2197.139420185"
Mar 07 08:25:49 crc kubenswrapper[4761]: I0307 08:25:49.445935 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rvtpp"
Mar 07 08:25:49 crc kubenswrapper[4761]: I0307 08:25:49.446341 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rvtpp"
Mar 07 08:25:50 crc kubenswrapper[4761]: I0307 08:25:50.500500 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rvtpp" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="registry-server" probeResult="failure" output=<
Mar 07 08:25:50 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 08:25:50 crc kubenswrapper[4761]: >
Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.158177 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547866-56rlk"]
Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.180442 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547866-56rlk"
Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.185194 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.186074 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv"
Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.186832 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.222365 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547866-56rlk"]
Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.310130 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwjvt\" (UniqueName: \"kubernetes.io/projected/fa60f65d-1134-4cac-bd66-fd5a70f064d0-kube-api-access-bwjvt\") pod \"auto-csr-approver-29547866-56rlk\" (UID: \"fa60f65d-1134-4cac-bd66-fd5a70f064d0\") " pod="openshift-infra/auto-csr-approver-29547866-56rlk"
Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.412729 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwjvt\" (UniqueName: \"kubernetes.io/projected/fa60f65d-1134-4cac-bd66-fd5a70f064d0-kube-api-access-bwjvt\") pod \"auto-csr-approver-29547866-56rlk\" (UID: \"fa60f65d-1134-4cac-bd66-fd5a70f064d0\") " pod="openshift-infra/auto-csr-approver-29547866-56rlk"
Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.431932 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwjvt\" (UniqueName: \"kubernetes.io/projected/fa60f65d-1134-4cac-bd66-fd5a70f064d0-kube-api-access-bwjvt\") pod \"auto-csr-approver-29547866-56rlk\" (UID: \"fa60f65d-1134-4cac-bd66-fd5a70f064d0\") " pod="openshift-infra/auto-csr-approver-29547866-56rlk"
Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.516814 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547866-56rlk"
Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.534567 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rvtpp" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="registry-server" probeResult="failure" output=<
Mar 07 08:26:00 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 08:26:00 crc kubenswrapper[4761]: >
Mar 07 08:26:01 crc kubenswrapper[4761]: W0307 08:26:01.083550 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa60f65d_1134_4cac_bd66_fd5a70f064d0.slice/crio-b92e3f9a9d24c54d34694c00735c4106a7168fc92857b1d5e062992d280908a6 WatchSource:0}: Error finding container b92e3f9a9d24c54d34694c00735c4106a7168fc92857b1d5e062992d280908a6: Status 404 returned error can't find the container with id b92e3f9a9d24c54d34694c00735c4106a7168fc92857b1d5e062992d280908a6
Mar 07 08:26:01 crc kubenswrapper[4761]: I0307 08:26:01.083649 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547866-56rlk"]
Mar 07 08:26:01 crc kubenswrapper[4761]: I0307 08:26:01.453513 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547866-56rlk" event={"ID":"fa60f65d-1134-4cac-bd66-fd5a70f064d0","Type":"ContainerStarted","Data":"b92e3f9a9d24c54d34694c00735c4106a7168fc92857b1d5e062992d280908a6"}
Mar 07 08:26:02 crc kubenswrapper[4761]: I0307 08:26:02.469866 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547866-56rlk" event={"ID":"fa60f65d-1134-4cac-bd66-fd5a70f064d0","Type":"ContainerStarted","Data":"f7ea310432b36a6153cf31887d1ac9f40396d49116da350bd0b549363b4e3af6"}
Mar 07 08:26:02 crc kubenswrapper[4761]: I0307 08:26:02.499895 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547866-56rlk" podStartSLOduration=1.636209296 podStartE2EDuration="2.499873671s" podCreationTimestamp="2026-03-07 08:26:00 +0000 UTC" firstStartedPulling="2026-03-07 08:26:01.086833814 +0000 UTC m=+2217.996000299" lastFinishedPulling="2026-03-07 08:26:01.950498189 +0000 UTC m=+2218.859664674" observedRunningTime="2026-03-07 08:26:02.487675945 +0000 UTC m=+2219.396842440" watchObservedRunningTime="2026-03-07 08:26:02.499873671 +0000 UTC m=+2219.409040146"
Mar 07 08:26:03 crc kubenswrapper[4761]: I0307 08:26:03.483272 4761 generic.go:334] "Generic (PLEG): container finished" podID="fa60f65d-1134-4cac-bd66-fd5a70f064d0" containerID="f7ea310432b36a6153cf31887d1ac9f40396d49116da350bd0b549363b4e3af6" exitCode=0
Mar 07 08:26:03 crc kubenswrapper[4761]: I0307 08:26:03.483360 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547866-56rlk" event={"ID":"fa60f65d-1134-4cac-bd66-fd5a70f064d0","Type":"ContainerDied","Data":"f7ea310432b36a6153cf31887d1ac9f40396d49116da350bd0b549363b4e3af6"}
Mar 07 08:26:04 crc kubenswrapper[4761]: I0307 08:26:04.910928 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547866-56rlk"
Mar 07 08:26:04 crc kubenswrapper[4761]: I0307 08:26:04.946759 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwjvt\" (UniqueName: \"kubernetes.io/projected/fa60f65d-1134-4cac-bd66-fd5a70f064d0-kube-api-access-bwjvt\") pod \"fa60f65d-1134-4cac-bd66-fd5a70f064d0\" (UID: \"fa60f65d-1134-4cac-bd66-fd5a70f064d0\") "
Mar 07 08:26:04 crc kubenswrapper[4761]: I0307 08:26:04.956049 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa60f65d-1134-4cac-bd66-fd5a70f064d0-kube-api-access-bwjvt" (OuterVolumeSpecName: "kube-api-access-bwjvt") pod "fa60f65d-1134-4cac-bd66-fd5a70f064d0" (UID: "fa60f65d-1134-4cac-bd66-fd5a70f064d0"). InnerVolumeSpecName "kube-api-access-bwjvt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:26:05 crc kubenswrapper[4761]: I0307 08:26:05.050097 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwjvt\" (UniqueName: \"kubernetes.io/projected/fa60f65d-1134-4cac-bd66-fd5a70f064d0-kube-api-access-bwjvt\") on node \"crc\" DevicePath \"\""
Mar 07 08:26:05 crc kubenswrapper[4761]: I0307 08:26:05.514898 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547866-56rlk" event={"ID":"fa60f65d-1134-4cac-bd66-fd5a70f064d0","Type":"ContainerDied","Data":"b92e3f9a9d24c54d34694c00735c4106a7168fc92857b1d5e062992d280908a6"}
Mar 07 08:26:05 crc kubenswrapper[4761]: I0307 08:26:05.515543 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b92e3f9a9d24c54d34694c00735c4106a7168fc92857b1d5e062992d280908a6"
Mar 07 08:26:05 crc kubenswrapper[4761]: I0307 08:26:05.515051 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547866-56rlk"
Mar 07 08:26:05 crc kubenswrapper[4761]: I0307 08:26:05.596138 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547860-d6dm6"]
Mar 07 08:26:05 crc kubenswrapper[4761]: I0307 08:26:05.614997 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547860-d6dm6"]
Mar 07 08:26:05 crc kubenswrapper[4761]: I0307 08:26:05.721366 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffa0ab32-8233-4b87-b335-eb94efbdfb06" path="/var/lib/kubelet/pods/ffa0ab32-8233-4b87-b335-eb94efbdfb06/volumes"
Mar 07 08:26:09 crc kubenswrapper[4761]: I0307 08:26:09.529869 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rvtpp"
Mar 07 08:26:09 crc kubenswrapper[4761]: I0307 08:26:09.609565 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rvtpp"
Mar 07 08:26:09 crc kubenswrapper[4761]: I0307 08:26:09.793218 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvtpp"]
Mar 07 08:26:10 crc kubenswrapper[4761]: I0307 08:26:10.586631 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rvtpp" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="registry-server" containerID="cri-o://7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2" gracePeriod=2
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.143937 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvtpp"
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.244585 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-utilities\") pod \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") "
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.244804 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bksts\" (UniqueName: \"kubernetes.io/projected/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-kube-api-access-bksts\") pod \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") "
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.245998 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-catalog-content\") pod \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") "
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.246305 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-utilities" (OuterVolumeSpecName: "utilities") pod "0a455673-cdb5-44f0-ac3b-0b23918ef4f6" (UID: "0a455673-cdb5-44f0-ac3b-0b23918ef4f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.247684 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.253900 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-kube-api-access-bksts" (OuterVolumeSpecName: "kube-api-access-bksts") pod "0a455673-cdb5-44f0-ac3b-0b23918ef4f6" (UID: "0a455673-cdb5-44f0-ac3b-0b23918ef4f6"). InnerVolumeSpecName "kube-api-access-bksts". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.350651 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bksts\" (UniqueName: \"kubernetes.io/projected/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-kube-api-access-bksts\") on node \"crc\" DevicePath \"\""
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.362577 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a455673-cdb5-44f0-ac3b-0b23918ef4f6" (UID: "0a455673-cdb5-44f0-ac3b-0b23918ef4f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.451903 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.601402 4761 generic.go:334] "Generic (PLEG): container finished" podID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerID="7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2" exitCode=0
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.601451 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvtpp" event={"ID":"0a455673-cdb5-44f0-ac3b-0b23918ef4f6","Type":"ContainerDied","Data":"7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2"}
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.601480 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvtpp" event={"ID":"0a455673-cdb5-44f0-ac3b-0b23918ef4f6","Type":"ContainerDied","Data":"272c533ffcd1480e21fa1b176a94fa62f045c932130f794e5a1fdeb5af6778cf"}
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.601500 4761 scope.go:117] "RemoveContainer" containerID="7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2"
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.601675 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvtpp"
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.640611 4761 scope.go:117] "RemoveContainer" containerID="2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165"
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.662781 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvtpp"]
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.671463 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rvtpp"]
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.673659 4761 scope.go:117] "RemoveContainer" containerID="5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d"
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.731161 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" path="/var/lib/kubelet/pods/0a455673-cdb5-44f0-ac3b-0b23918ef4f6/volumes"
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.740103 4761 scope.go:117] "RemoveContainer" containerID="7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2"
Mar 07 08:26:11 crc kubenswrapper[4761]: E0307 08:26:11.740620 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2\": container with ID starting with 7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2 not found: ID does not exist" containerID="7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2"
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.740672 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2"} err="failed to get container status \"7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2\": rpc error: code = NotFound desc = could not find container \"7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2\": container with ID starting with 7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2 not found: ID does not exist"
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.740705 4761 scope.go:117] "RemoveContainer" containerID="2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165"
Mar 07 08:26:11 crc kubenswrapper[4761]: E0307 08:26:11.741126 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165\": container with ID starting with 2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165 not found: ID does not exist" containerID="2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165"
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.741165 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165"} err="failed to get container status \"2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165\": rpc error: code = NotFound desc = could not find container \"2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165\": container with ID starting with 2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165 not found: ID does not exist"
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.741188 4761 scope.go:117] "RemoveContainer" containerID="5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d"
Mar 07 08:26:11 crc kubenswrapper[4761]: E0307 08:26:11.741634 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d\": container with ID starting with 5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d not found: ID does not exist" containerID="5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d"
Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.741677 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d"} err="failed to get container status \"5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d\": rpc error: code = NotFound desc = could not find container \"5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d\": container with ID starting with 5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d not found: ID does not exist"
Mar 07 08:26:13 crc kubenswrapper[4761]: I0307 08:26:13.768407 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 08:26:13 crc kubenswrapper[4761]: I0307 08:26:13.768802 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 08:26:18 crc kubenswrapper[4761]: I0307 08:26:18.570436 4761 scope.go:117] "RemoveContainer" containerID="10078fb6c1a8e617ee923e1cee93a96c671ff14c92c3f2b495d63428c1465950"
Mar 07 08:26:18 crc kubenswrapper[4761]: I0307 08:26:18.632216 4761 scope.go:117] "RemoveContainer" containerID="e04c2b95dad8241d3b28cfd6ddafa5597a39cc35f1df65e0fd0a09feec72001e"
Mar 07 08:26:26 crc kubenswrapper[4761]: I0307 08:26:26.878319 4761 generic.go:334] "Generic (PLEG): container finished" podID="0e72d6d8-c8fb-4093-9395-c3de682b7aa9" containerID="68077264f55a20414b119f09c96eb4f8b0a42eecf2388e2349c247d9d3c08a0f" exitCode=0
Mar 07 08:26:26 crc kubenswrapper[4761]: I0307 08:26:26.878489 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" event={"ID":"0e72d6d8-c8fb-4093-9395-c3de682b7aa9","Type":"ContainerDied","Data":"68077264f55a20414b119f09c96eb4f8b0a42eecf2388e2349c247d9d3c08a0f"}
Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.455800 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"
Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.543967 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-inventory\") pod \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") "
Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.544127 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-ssh-key-openstack-edpm-ipam\") pod \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") "
Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.544362 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlvvf\" (UniqueName: \"kubernetes.io/projected/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-kube-api-access-hlvvf\") pod \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") "
Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.575052 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-kube-api-access-hlvvf" (OuterVolumeSpecName: "kube-api-access-hlvvf") pod "0e72d6d8-c8fb-4093-9395-c3de682b7aa9" (UID: "0e72d6d8-c8fb-4093-9395-c3de682b7aa9"). InnerVolumeSpecName "kube-api-access-hlvvf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.581871 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0e72d6d8-c8fb-4093-9395-c3de682b7aa9" (UID: "0e72d6d8-c8fb-4093-9395-c3de682b7aa9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.586474 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-inventory" (OuterVolumeSpecName: "inventory") pod "0e72d6d8-c8fb-4093-9395-c3de682b7aa9" (UID: "0e72d6d8-c8fb-4093-9395-c3de682b7aa9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.647037 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-inventory\") on node \"crc\" DevicePath \"\""
Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.647071 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.647086 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlvvf\" (UniqueName: \"kubernetes.io/projected/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-kube-api-access-hlvvf\") on node \"crc\" DevicePath \"\""
Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.905838 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" event={"ID":"0e72d6d8-c8fb-4093-9395-c3de682b7aa9","Type":"ContainerDied","Data":"644302c61a4f5d4dfd273d575bf9c9cb42f1c0016c2e6fe51679e7f2e76defcc"}
Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.906093 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="644302c61a4f5d4dfd273d575bf9c9cb42f1c0016c2e6fe51679e7f2e76defcc"
Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.906206 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"
Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.043564 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hvs2h"]
Mar 07 08:26:29 crc kubenswrapper[4761]: E0307 08:26:29.043989 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa60f65d-1134-4cac-bd66-fd5a70f064d0" containerName="oc"
Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.044004 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa60f65d-1134-4cac-bd66-fd5a70f064d0" containerName="oc"
Mar 07 08:26:29 crc kubenswrapper[4761]: E0307 08:26:29.044015 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="registry-server"
Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.044022 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="registry-server"
Mar 07 08:26:29 crc kubenswrapper[4761]: E0307 08:26:29.044053 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e72d6d8-c8fb-4093-9395-c3de682b7aa9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.044060 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e72d6d8-c8fb-4093-9395-c3de682b7aa9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Mar 07 08:26:29 crc kubenswrapper[4761]: E0307 08:26:29.044077 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="extract-content"
Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.044083 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="extract-content"
Mar 07 08:26:29 crc kubenswrapper[4761]: E0307 08:26:29.044106 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="extract-utilities"
Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.044115 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="extract-utilities"
Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.044306 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa60f65d-1134-4cac-bd66-fd5a70f064d0" containerName="oc"
Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.044326 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e72d6d8-c8fb-4093-9395-c3de682b7aa9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.044347 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="registry-server"
Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.045171 4761 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.057216 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.057341 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.057363 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.058148 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.080904 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hvs2h"] Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.160378 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hvs2h\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.160735 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hvs2h\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.160837 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mjdbt\" (UniqueName: \"kubernetes.io/projected/c64904be-c7ab-4389-8efc-1fa8d0b25c20-kube-api-access-mjdbt\") pod \"ssh-known-hosts-edpm-deployment-hvs2h\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.264060 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjdbt\" (UniqueName: \"kubernetes.io/projected/c64904be-c7ab-4389-8efc-1fa8d0b25c20-kube-api-access-mjdbt\") pod \"ssh-known-hosts-edpm-deployment-hvs2h\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.264214 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hvs2h\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.264444 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hvs2h\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.283695 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hvs2h\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 
08:26:29.283878 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hvs2h\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.288256 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjdbt\" (UniqueName: \"kubernetes.io/projected/c64904be-c7ab-4389-8efc-1fa8d0b25c20-kube-api-access-mjdbt\") pod \"ssh-known-hosts-edpm-deployment-hvs2h\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.381469 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:30 crc kubenswrapper[4761]: I0307 08:26:30.002349 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hvs2h"] Mar 07 08:26:30 crc kubenswrapper[4761]: I0307 08:26:30.932499 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" event={"ID":"c64904be-c7ab-4389-8efc-1fa8d0b25c20","Type":"ContainerStarted","Data":"c853827a4dfa9035e4431d7591774aea401e3ca72b04a68ebf3a56802d085c8e"} Mar 07 08:26:30 crc kubenswrapper[4761]: I0307 08:26:30.934335 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" event={"ID":"c64904be-c7ab-4389-8efc-1fa8d0b25c20","Type":"ContainerStarted","Data":"c205d807aa7612777fe5110bb2c707686e9ad7396d7f5ae62032d50bbfe96279"} Mar 07 08:26:30 crc kubenswrapper[4761]: I0307 08:26:30.959778 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" podStartSLOduration=1.5055558279999999 
podStartE2EDuration="1.959759845s" podCreationTimestamp="2026-03-07 08:26:29 +0000 UTC" firstStartedPulling="2026-03-07 08:26:30.008858365 +0000 UTC m=+2246.918024840" lastFinishedPulling="2026-03-07 08:26:30.463062382 +0000 UTC m=+2247.372228857" observedRunningTime="2026-03-07 08:26:30.94680547 +0000 UTC m=+2247.855971955" watchObservedRunningTime="2026-03-07 08:26:30.959759845 +0000 UTC m=+2247.868926320" Mar 07 08:26:38 crc kubenswrapper[4761]: I0307 08:26:38.014559 4761 generic.go:334] "Generic (PLEG): container finished" podID="c64904be-c7ab-4389-8efc-1fa8d0b25c20" containerID="c853827a4dfa9035e4431d7591774aea401e3ca72b04a68ebf3a56802d085c8e" exitCode=0 Mar 07 08:26:38 crc kubenswrapper[4761]: I0307 08:26:38.014650 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" event={"ID":"c64904be-c7ab-4389-8efc-1fa8d0b25c20","Type":"ContainerDied","Data":"c853827a4dfa9035e4431d7591774aea401e3ca72b04a68ebf3a56802d085c8e"} Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.563068 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.638292 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-inventory-0\") pod \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.638581 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-ssh-key-openstack-edpm-ipam\") pod \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.638949 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjdbt\" (UniqueName: \"kubernetes.io/projected/c64904be-c7ab-4389-8efc-1fa8d0b25c20-kube-api-access-mjdbt\") pod \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.653541 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c64904be-c7ab-4389-8efc-1fa8d0b25c20-kube-api-access-mjdbt" (OuterVolumeSpecName: "kube-api-access-mjdbt") pod "c64904be-c7ab-4389-8efc-1fa8d0b25c20" (UID: "c64904be-c7ab-4389-8efc-1fa8d0b25c20"). InnerVolumeSpecName "kube-api-access-mjdbt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.676271 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c64904be-c7ab-4389-8efc-1fa8d0b25c20" (UID: "c64904be-c7ab-4389-8efc-1fa8d0b25c20"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.693525 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c64904be-c7ab-4389-8efc-1fa8d0b25c20" (UID: "c64904be-c7ab-4389-8efc-1fa8d0b25c20"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.746786 4761 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-inventory-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.746833 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.746848 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjdbt\" (UniqueName: \"kubernetes.io/projected/c64904be-c7ab-4389-8efc-1fa8d0b25c20-kube-api-access-mjdbt\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.041154 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" 
event={"ID":"c64904be-c7ab-4389-8efc-1fa8d0b25c20","Type":"ContainerDied","Data":"c205d807aa7612777fe5110bb2c707686e9ad7396d7f5ae62032d50bbfe96279"} Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.041232 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c205d807aa7612777fe5110bb2c707686e9ad7396d7f5ae62032d50bbfe96279" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.041250 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.182940 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6"] Mar 07 08:26:40 crc kubenswrapper[4761]: E0307 08:26:40.184068 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c64904be-c7ab-4389-8efc-1fa8d0b25c20" containerName="ssh-known-hosts-edpm-deployment" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.184098 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64904be-c7ab-4389-8efc-1fa8d0b25c20" containerName="ssh-known-hosts-edpm-deployment" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.184427 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c64904be-c7ab-4389-8efc-1fa8d0b25c20" containerName="ssh-known-hosts-edpm-deployment" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.185502 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.190223 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.191172 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.191322 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.191521 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.197617 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6"] Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.366934 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-62nh6\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.367039 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw6wl\" (UniqueName: \"kubernetes.io/projected/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-kube-api-access-nw6wl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-62nh6\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.367199 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-62nh6\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.469870 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-62nh6\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.470005 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-62nh6\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.470095 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw6wl\" (UniqueName: \"kubernetes.io/projected/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-kube-api-access-nw6wl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-62nh6\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.475899 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-62nh6\" (UID: 
\"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.478507 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-62nh6\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.497931 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw6wl\" (UniqueName: \"kubernetes.io/projected/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-kube-api-access-nw6wl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-62nh6\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.564860 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:41 crc kubenswrapper[4761]: I0307 08:26:41.185702 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6"] Mar 07 08:26:42 crc kubenswrapper[4761]: I0307 08:26:42.070194 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" event={"ID":"bff456cc-066d-4ffe-a805-cd7a82d7d6e1","Type":"ContainerStarted","Data":"5f64a21c25786ca3e3975de5d7d9f6f0cc7efff639fbf548991a0dc7389c9be9"} Mar 07 08:26:42 crc kubenswrapper[4761]: I0307 08:26:42.070845 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" event={"ID":"bff456cc-066d-4ffe-a805-cd7a82d7d6e1","Type":"ContainerStarted","Data":"97e1bba5d79a7f538ef7d699cde59dc8703ca445b90ec9a25af6d80739c2f4c7"} Mar 07 08:26:42 crc kubenswrapper[4761]: I0307 08:26:42.099422 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" podStartSLOduration=1.6980880969999999 podStartE2EDuration="2.09939148s" podCreationTimestamp="2026-03-07 08:26:40 +0000 UTC" firstStartedPulling="2026-03-07 08:26:41.18602125 +0000 UTC m=+2258.095187735" lastFinishedPulling="2026-03-07 08:26:41.587324603 +0000 UTC m=+2258.496491118" observedRunningTime="2026-03-07 08:26:42.08743108 +0000 UTC m=+2258.996597595" watchObservedRunningTime="2026-03-07 08:26:42.09939148 +0000 UTC m=+2259.008557995" Mar 07 08:26:43 crc kubenswrapper[4761]: I0307 08:26:43.768574 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:26:43 crc kubenswrapper[4761]: I0307 
08:26:43.768633 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:26:50 crc kubenswrapper[4761]: I0307 08:26:50.205369 4761 generic.go:334] "Generic (PLEG): container finished" podID="bff456cc-066d-4ffe-a805-cd7a82d7d6e1" containerID="5f64a21c25786ca3e3975de5d7d9f6f0cc7efff639fbf548991a0dc7389c9be9" exitCode=0 Mar 07 08:26:50 crc kubenswrapper[4761]: I0307 08:26:50.205432 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" event={"ID":"bff456cc-066d-4ffe-a805-cd7a82d7d6e1","Type":"ContainerDied","Data":"5f64a21c25786ca3e3975de5d7d9f6f0cc7efff639fbf548991a0dc7389c9be9"} Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.760184 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.782576 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-ssh-key-openstack-edpm-ipam\") pod \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.782824 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw6wl\" (UniqueName: \"kubernetes.io/projected/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-kube-api-access-nw6wl\") pod \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.783046 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-inventory\") pod \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.788838 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-kube-api-access-nw6wl" (OuterVolumeSpecName: "kube-api-access-nw6wl") pod "bff456cc-066d-4ffe-a805-cd7a82d7d6e1" (UID: "bff456cc-066d-4ffe-a805-cd7a82d7d6e1"). InnerVolumeSpecName "kube-api-access-nw6wl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.834490 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bff456cc-066d-4ffe-a805-cd7a82d7d6e1" (UID: "bff456cc-066d-4ffe-a805-cd7a82d7d6e1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.837778 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-inventory" (OuterVolumeSpecName: "inventory") pod "bff456cc-066d-4ffe-a805-cd7a82d7d6e1" (UID: "bff456cc-066d-4ffe-a805-cd7a82d7d6e1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.886657 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.886708 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.886752 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw6wl\" (UniqueName: \"kubernetes.io/projected/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-kube-api-access-nw6wl\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.246895 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" 
event={"ID":"bff456cc-066d-4ffe-a805-cd7a82d7d6e1","Type":"ContainerDied","Data":"97e1bba5d79a7f538ef7d699cde59dc8703ca445b90ec9a25af6d80739c2f4c7"} Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.247463 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97e1bba5d79a7f538ef7d699cde59dc8703ca445b90ec9a25af6d80739c2f4c7" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.247046 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.325677 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h"] Mar 07 08:26:52 crc kubenswrapper[4761]: E0307 08:26:52.326151 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff456cc-066d-4ffe-a805-cd7a82d7d6e1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.326166 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff456cc-066d-4ffe-a805-cd7a82d7d6e1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.326365 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff456cc-066d-4ffe-a805-cd7a82d7d6e1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.327096 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.330414 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.341645 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.342002 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.342254 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.361319 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h"] Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.408569 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwbs9\" (UniqueName: \"kubernetes.io/projected/3aa544e2-be60-4e2a-9d61-1634fbf51479-kube-api-access-qwbs9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.408655 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.408695 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.511298 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwbs9\" (UniqueName: \"kubernetes.io/projected/3aa544e2-be60-4e2a-9d61-1634fbf51479-kube-api-access-qwbs9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.511371 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.511407 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.518434 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.519816 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.528085 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwbs9\" (UniqueName: \"kubernetes.io/projected/3aa544e2-be60-4e2a-9d61-1634fbf51479-kube-api-access-qwbs9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.662003 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:53 crc kubenswrapper[4761]: I0307 08:26:53.263018 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h"] Mar 07 08:26:54 crc kubenswrapper[4761]: I0307 08:26:54.271620 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" event={"ID":"3aa544e2-be60-4e2a-9d61-1634fbf51479","Type":"ContainerStarted","Data":"25807cd386e3da3e148cf92f24586b61931177a09c19c8a8bf06737f10de3c58"} Mar 07 08:26:54 crc kubenswrapper[4761]: I0307 08:26:54.271957 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" event={"ID":"3aa544e2-be60-4e2a-9d61-1634fbf51479","Type":"ContainerStarted","Data":"2b6ebc814bea804d3c8a8a41a68d34d400c73785d914dee824ec3a92b7141b1d"} Mar 07 08:26:54 crc kubenswrapper[4761]: I0307 08:26:54.297461 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" podStartSLOduration=1.9001524920000001 podStartE2EDuration="2.297427303s" podCreationTimestamp="2026-03-07 08:26:52 +0000 UTC" firstStartedPulling="2026-03-07 08:26:53.263862621 +0000 UTC m=+2270.173029096" lastFinishedPulling="2026-03-07 08:26:53.661137432 +0000 UTC m=+2270.570303907" observedRunningTime="2026-03-07 08:26:54.293490165 +0000 UTC m=+2271.202656640" watchObservedRunningTime="2026-03-07 08:26:54.297427303 +0000 UTC m=+2271.206593828" Mar 07 08:27:04 crc kubenswrapper[4761]: I0307 08:27:04.409898 4761 generic.go:334] "Generic (PLEG): container finished" podID="3aa544e2-be60-4e2a-9d61-1634fbf51479" containerID="25807cd386e3da3e148cf92f24586b61931177a09c19c8a8bf06737f10de3c58" exitCode=0 Mar 07 08:27:04 crc kubenswrapper[4761]: I0307 08:27:04.410547 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" event={"ID":"3aa544e2-be60-4e2a-9d61-1634fbf51479","Type":"ContainerDied","Data":"25807cd386e3da3e148cf92f24586b61931177a09c19c8a8bf06737f10de3c58"} Mar 07 08:27:05 crc kubenswrapper[4761]: I0307 08:27:05.933045 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:27:05 crc kubenswrapper[4761]: I0307 08:27:05.990797 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-ssh-key-openstack-edpm-ipam\") pod \"3aa544e2-be60-4e2a-9d61-1634fbf51479\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " Mar 07 08:27:05 crc kubenswrapper[4761]: I0307 08:27:05.990845 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-inventory\") pod \"3aa544e2-be60-4e2a-9d61-1634fbf51479\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " Mar 07 08:27:05 crc kubenswrapper[4761]: I0307 08:27:05.990903 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwbs9\" (UniqueName: \"kubernetes.io/projected/3aa544e2-be60-4e2a-9d61-1634fbf51479-kube-api-access-qwbs9\") pod \"3aa544e2-be60-4e2a-9d61-1634fbf51479\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " Mar 07 08:27:05 crc kubenswrapper[4761]: I0307 08:27:05.997299 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa544e2-be60-4e2a-9d61-1634fbf51479-kube-api-access-qwbs9" (OuterVolumeSpecName: "kube-api-access-qwbs9") pod "3aa544e2-be60-4e2a-9d61-1634fbf51479" (UID: "3aa544e2-be60-4e2a-9d61-1634fbf51479"). InnerVolumeSpecName "kube-api-access-qwbs9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.039838 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3aa544e2-be60-4e2a-9d61-1634fbf51479" (UID: "3aa544e2-be60-4e2a-9d61-1634fbf51479"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.053265 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-inventory" (OuterVolumeSpecName: "inventory") pod "3aa544e2-be60-4e2a-9d61-1634fbf51479" (UID: "3aa544e2-be60-4e2a-9d61-1634fbf51479"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.095210 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwbs9\" (UniqueName: \"kubernetes.io/projected/3aa544e2-be60-4e2a-9d61-1634fbf51479-kube-api-access-qwbs9\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.095251 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.095265 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.450188 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" 
event={"ID":"3aa544e2-be60-4e2a-9d61-1634fbf51479","Type":"ContainerDied","Data":"2b6ebc814bea804d3c8a8a41a68d34d400c73785d914dee824ec3a92b7141b1d"} Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.450248 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b6ebc814bea804d3c8a8a41a68d34d400c73785d914dee824ec3a92b7141b1d" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.450328 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.583408 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg"] Mar 07 08:27:06 crc kubenswrapper[4761]: E0307 08:27:06.584311 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa544e2-be60-4e2a-9d61-1634fbf51479" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.584526 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa544e2-be60-4e2a-9d61-1634fbf51479" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.584949 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa544e2-be60-4e2a-9d61-1634fbf51479" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.585999 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.591872 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.592274 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.592471 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.592687 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.592892 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.593076 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.593255 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.593434 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.595342 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.597647 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg"] Mar 07 08:27:06 crc 
kubenswrapper[4761]: I0307 08:27:06.612485 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612553 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612577 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612661 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: 
I0307 08:27:06.612684 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612733 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612761 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612820 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc 
kubenswrapper[4761]: I0307 08:27:06.612867 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612900 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmzvm\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-kube-api-access-dmzvm\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612935 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612967 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: 
I0307 08:27:06.613000 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.613030 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.613051 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.613088 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc 
kubenswrapper[4761]: I0307 08:27:06.714829 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.714922 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715027 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715095 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715132 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715178 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715215 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715276 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715319 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715412 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715484 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715535 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmzvm\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-kube-api-access-dmzvm\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715603 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715651 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715701 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.716088 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.719897 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.722737 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.722839 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.723222 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.723947 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.725847 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.725947 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.726158 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.727297 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.727519 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.728003 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.728884 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.730196 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.731347 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.731628 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.745842 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmzvm\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-kube-api-access-dmzvm\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.929556 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:07 crc kubenswrapper[4761]: I0307 08:27:07.607840 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg"] Mar 07 08:27:08 crc kubenswrapper[4761]: I0307 08:27:08.479618 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" event={"ID":"927c98b8-4e9f-41dc-9faa-fef8e98a71d2","Type":"ContainerStarted","Data":"feb08e2fc96f0f04f219dad600e18c34a7bd3d6fd2fbbc0bb0b21ec9d0239503"} Mar 07 08:27:08 crc kubenswrapper[4761]: I0307 08:27:08.479998 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" event={"ID":"927c98b8-4e9f-41dc-9faa-fef8e98a71d2","Type":"ContainerStarted","Data":"24153542d8fd54dc851e501ec1412ef685b8d7f0c4c7f412968bafe2fbf845c1"} Mar 07 08:27:08 crc kubenswrapper[4761]: I0307 08:27:08.510131 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" podStartSLOduration=1.995572186 podStartE2EDuration="2.510109612s" podCreationTimestamp="2026-03-07 08:27:06 +0000 UTC" firstStartedPulling="2026-03-07 08:27:07.616133792 +0000 UTC m=+2284.525300277" lastFinishedPulling="2026-03-07 08:27:08.130671228 +0000 UTC m=+2285.039837703" observedRunningTime="2026-03-07 08:27:08.50123174 +0000 UTC m=+2285.410398255" watchObservedRunningTime="2026-03-07 08:27:08.510109612 +0000 UTC m=+2285.419276087" Mar 07 08:27:13 crc kubenswrapper[4761]: I0307 08:27:13.768578 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:27:13 crc 
kubenswrapper[4761]: I0307 08:27:13.769243 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:27:13 crc kubenswrapper[4761]: I0307 08:27:13.769306 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:27:13 crc kubenswrapper[4761]: I0307 08:27:13.770592 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4205e887a96e2c7dfc1520ac45c44653f6029f5d7474aa135bc6c6eb298eb9d6"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:27:13 crc kubenswrapper[4761]: I0307 08:27:13.770692 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://4205e887a96e2c7dfc1520ac45c44653f6029f5d7474aa135bc6c6eb298eb9d6" gracePeriod=600 Mar 07 08:27:14 crc kubenswrapper[4761]: I0307 08:27:14.556613 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="4205e887a96e2c7dfc1520ac45c44653f6029f5d7474aa135bc6c6eb298eb9d6" exitCode=0 Mar 07 08:27:14 crc kubenswrapper[4761]: I0307 08:27:14.556670 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"4205e887a96e2c7dfc1520ac45c44653f6029f5d7474aa135bc6c6eb298eb9d6"} 
Mar 07 08:27:14 crc kubenswrapper[4761]: I0307 08:27:14.557495 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"} Mar 07 08:27:14 crc kubenswrapper[4761]: I0307 08:27:14.557562 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:27:23 crc kubenswrapper[4761]: I0307 08:27:23.068834 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-bhq7g"] Mar 07 08:27:23 crc kubenswrapper[4761]: I0307 08:27:23.079230 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-bhq7g"] Mar 07 08:27:23 crc kubenswrapper[4761]: I0307 08:27:23.737179 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f02c4d0-220b-4761-a494-7a054eef8672" path="/var/lib/kubelet/pods/7f02c4d0-220b-4761-a494-7a054eef8672/volumes" Mar 07 08:27:54 crc kubenswrapper[4761]: I0307 08:27:54.444380 4761 generic.go:334] "Generic (PLEG): container finished" podID="927c98b8-4e9f-41dc-9faa-fef8e98a71d2" containerID="feb08e2fc96f0f04f219dad600e18c34a7bd3d6fd2fbbc0bb0b21ec9d0239503" exitCode=0 Mar 07 08:27:54 crc kubenswrapper[4761]: I0307 08:27:54.444502 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" event={"ID":"927c98b8-4e9f-41dc-9faa-fef8e98a71d2","Type":"ContainerDied","Data":"feb08e2fc96f0f04f219dad600e18c34a7bd3d6fd2fbbc0bb0b21ec9d0239503"} Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.001894 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.099386 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-libvirt-combined-ca-bundle\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.099667 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.099707 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-combined-ca-bundle\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.099761 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.099808 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ovn-combined-ca-bundle\") pod 
\"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.099835 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.099935 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.099974 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.100048 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-inventory\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.100089 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-neutron-metadata-combined-ca-bundle\") pod 
\"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.100109 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ssh-key-openstack-edpm-ipam\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.100156 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-bootstrap-combined-ca-bundle\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.100187 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-repo-setup-combined-ca-bundle\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.100205 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmzvm\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-kube-api-access-dmzvm\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.100224 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-power-monitoring-combined-ca-bundle\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: 
\"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.100244 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-nova-combined-ca-bundle\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.106848 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.107455 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.108187 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.108235 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.108884 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.110252 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.111035 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.113415 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.113419 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.113490 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.113838 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.115320 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.121026 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-kube-api-access-dmzvm" (OuterVolumeSpecName: "kube-api-access-dmzvm") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "kube-api-access-dmzvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.130122 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.142477 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-inventory" (OuterVolumeSpecName: "inventory") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.146888 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203391 4761 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203421 4761 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203451 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmzvm\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-kube-api-access-dmzvm\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203463 4761 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203475 4761 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203487 4761 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203497 4761 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203525 4761 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203537 4761 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203548 4761 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203557 4761 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203566 4761 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203578 4761 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203606 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203617 4761 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203627 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.469674 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" event={"ID":"927c98b8-4e9f-41dc-9faa-fef8e98a71d2","Type":"ContainerDied","Data":"24153542d8fd54dc851e501ec1412ef685b8d7f0c4c7f412968bafe2fbf845c1"} Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.469750 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24153542d8fd54dc851e501ec1412ef685b8d7f0c4c7f412968bafe2fbf845c1" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.469763 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.670416 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc"] Mar 07 08:27:56 crc kubenswrapper[4761]: E0307 08:27:56.671108 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="927c98b8-4e9f-41dc-9faa-fef8e98a71d2" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.671136 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="927c98b8-4e9f-41dc-9faa-fef8e98a71d2" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.671590 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="927c98b8-4e9f-41dc-9faa-fef8e98a71d2" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.672777 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.683661 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc"] Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.701363 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.701390 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.702090 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.702089 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.702552 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.715817 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.715861 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: 
\"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.715916 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.716005 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.716029 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj675\" (UniqueName: \"kubernetes.io/projected/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-kube-api-access-jj675\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.818088 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.818273 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.818311 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj675\" (UniqueName: \"kubernetes.io/projected/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-kube-api-access-jj675\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.819346 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.819407 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.820633 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: 
\"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.824290 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.824389 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.826640 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.840265 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj675\" (UniqueName: \"kubernetes.io/projected/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-kube-api-access-jj675\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:57 crc kubenswrapper[4761]: I0307 08:27:57.017799 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:57 crc kubenswrapper[4761]: I0307 08:27:57.630510 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc"] Mar 07 08:27:58 crc kubenswrapper[4761]: I0307 08:27:58.512938 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" event={"ID":"f1b69a5f-4327-4ef7-a28d-a638e579ea5d","Type":"ContainerStarted","Data":"04ca6a19aedafe395cb2de1ac576127664535113b2fc2ad6a9c3ffeb360e8e62"} Mar 07 08:27:58 crc kubenswrapper[4761]: I0307 08:27:58.515169 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" event={"ID":"f1b69a5f-4327-4ef7-a28d-a638e579ea5d","Type":"ContainerStarted","Data":"4a2f27ddc9e28fdc97fa8ba61ae89bd6ee20adcf1a91345f5d3e767fdefc7ce8"} Mar 07 08:27:58 crc kubenswrapper[4761]: I0307 08:27:58.537373 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" podStartSLOduration=2.128023387 podStartE2EDuration="2.533692508s" podCreationTimestamp="2026-03-07 08:27:56 +0000 UTC" firstStartedPulling="2026-03-07 08:27:57.640842586 +0000 UTC m=+2334.550009061" lastFinishedPulling="2026-03-07 08:27:58.046511717 +0000 UTC m=+2334.955678182" observedRunningTime="2026-03-07 08:27:58.529000231 +0000 UTC m=+2335.438166706" watchObservedRunningTime="2026-03-07 08:27:58.533692508 +0000 UTC m=+2335.442859003" Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.138057 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547868-8xrsg"] Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.139927 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547868-8xrsg" Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.142901 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.143404 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.143691 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.158845 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547868-8xrsg"] Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.225400 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb279\" (UniqueName: \"kubernetes.io/projected/24a1f3f8-f795-495f-bb5a-58c9511a97f2-kube-api-access-sb279\") pod \"auto-csr-approver-29547868-8xrsg\" (UID: \"24a1f3f8-f795-495f-bb5a-58c9511a97f2\") " pod="openshift-infra/auto-csr-approver-29547868-8xrsg" Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.327805 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb279\" (UniqueName: \"kubernetes.io/projected/24a1f3f8-f795-495f-bb5a-58c9511a97f2-kube-api-access-sb279\") pod \"auto-csr-approver-29547868-8xrsg\" (UID: \"24a1f3f8-f795-495f-bb5a-58c9511a97f2\") " pod="openshift-infra/auto-csr-approver-29547868-8xrsg" Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.349969 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb279\" (UniqueName: \"kubernetes.io/projected/24a1f3f8-f795-495f-bb5a-58c9511a97f2-kube-api-access-sb279\") pod \"auto-csr-approver-29547868-8xrsg\" (UID: \"24a1f3f8-f795-495f-bb5a-58c9511a97f2\") " 
pod="openshift-infra/auto-csr-approver-29547868-8xrsg" Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.463948 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547868-8xrsg" Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.965017 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547868-8xrsg"] Mar 07 08:28:01 crc kubenswrapper[4761]: I0307 08:28:01.548502 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547868-8xrsg" event={"ID":"24a1f3f8-f795-495f-bb5a-58c9511a97f2","Type":"ContainerStarted","Data":"766aa768f011f18458652873d4145559e8d87d6af0309c20bde87f18f9771ed2"} Mar 07 08:28:02 crc kubenswrapper[4761]: I0307 08:28:02.576528 4761 generic.go:334] "Generic (PLEG): container finished" podID="24a1f3f8-f795-495f-bb5a-58c9511a97f2" containerID="90c37012179316620f55f1d98a9814b88c29420d9b4a62e6c8a02946fc534a5e" exitCode=0 Mar 07 08:28:02 crc kubenswrapper[4761]: I0307 08:28:02.576626 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547868-8xrsg" event={"ID":"24a1f3f8-f795-495f-bb5a-58c9511a97f2","Type":"ContainerDied","Data":"90c37012179316620f55f1d98a9814b88c29420d9b4a62e6c8a02946fc534a5e"} Mar 07 08:28:04 crc kubenswrapper[4761]: I0307 08:28:04.095738 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547868-8xrsg" Mar 07 08:28:04 crc kubenswrapper[4761]: I0307 08:28:04.127113 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb279\" (UniqueName: \"kubernetes.io/projected/24a1f3f8-f795-495f-bb5a-58c9511a97f2-kube-api-access-sb279\") pod \"24a1f3f8-f795-495f-bb5a-58c9511a97f2\" (UID: \"24a1f3f8-f795-495f-bb5a-58c9511a97f2\") " Mar 07 08:28:04 crc kubenswrapper[4761]: I0307 08:28:04.133356 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a1f3f8-f795-495f-bb5a-58c9511a97f2-kube-api-access-sb279" (OuterVolumeSpecName: "kube-api-access-sb279") pod "24a1f3f8-f795-495f-bb5a-58c9511a97f2" (UID: "24a1f3f8-f795-495f-bb5a-58c9511a97f2"). InnerVolumeSpecName "kube-api-access-sb279". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:28:04 crc kubenswrapper[4761]: I0307 08:28:04.230526 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb279\" (UniqueName: \"kubernetes.io/projected/24a1f3f8-f795-495f-bb5a-58c9511a97f2-kube-api-access-sb279\") on node \"crc\" DevicePath \"\"" Mar 07 08:28:04 crc kubenswrapper[4761]: I0307 08:28:04.603779 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547868-8xrsg" event={"ID":"24a1f3f8-f795-495f-bb5a-58c9511a97f2","Type":"ContainerDied","Data":"766aa768f011f18458652873d4145559e8d87d6af0309c20bde87f18f9771ed2"} Mar 07 08:28:04 crc kubenswrapper[4761]: I0307 08:28:04.603832 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="766aa768f011f18458652873d4145559e8d87d6af0309c20bde87f18f9771ed2" Mar 07 08:28:04 crc kubenswrapper[4761]: I0307 08:28:04.603841 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547868-8xrsg" Mar 07 08:28:05 crc kubenswrapper[4761]: I0307 08:28:05.188213 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547862-td4lg"] Mar 07 08:28:05 crc kubenswrapper[4761]: I0307 08:28:05.202275 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547862-td4lg"] Mar 07 08:28:05 crc kubenswrapper[4761]: I0307 08:28:05.726443 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="256bcb0e-2dae-4547-a0d9-5f9545732bc7" path="/var/lib/kubelet/pods/256bcb0e-2dae-4547-a0d9-5f9545732bc7/volumes" Mar 07 08:28:18 crc kubenswrapper[4761]: I0307 08:28:18.818086 4761 scope.go:117] "RemoveContainer" containerID="79627cce3b9c042cfb01ec6002981e9c1693a4df041409e9c65c779592c48701" Mar 07 08:28:18 crc kubenswrapper[4761]: I0307 08:28:18.865898 4761 scope.go:117] "RemoveContainer" containerID="548972e02784866505e9c24ffd4b574561fc0ad963d71d809b954ff28861a93e" Mar 07 08:28:23 crc kubenswrapper[4761]: I0307 08:28:23.054816 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-zwc7j"] Mar 07 08:28:23 crc kubenswrapper[4761]: I0307 08:28:23.071422 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-zwc7j"] Mar 07 08:28:23 crc kubenswrapper[4761]: I0307 08:28:23.726132 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95dc33be-c55b-4068-be61-85ad0e5724d6" path="/var/lib/kubelet/pods/95dc33be-c55b-4068-be61-85ad0e5724d6/volumes" Mar 07 08:29:02 crc kubenswrapper[4761]: I0307 08:29:02.367002 4761 generic.go:334] "Generic (PLEG): container finished" podID="f1b69a5f-4327-4ef7-a28d-a638e579ea5d" containerID="04ca6a19aedafe395cb2de1ac576127664535113b2fc2ad6a9c3ffeb360e8e62" exitCode=0 Mar 07 08:29:02 crc kubenswrapper[4761]: I0307 08:29:02.367064 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" event={"ID":"f1b69a5f-4327-4ef7-a28d-a638e579ea5d","Type":"ContainerDied","Data":"04ca6a19aedafe395cb2de1ac576127664535113b2fc2ad6a9c3ffeb360e8e62"} Mar 07 08:29:03 crc kubenswrapper[4761]: I0307 08:29:03.882784 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.032956 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ssh-key-openstack-edpm-ipam\") pod \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.033012 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovn-combined-ca-bundle\") pod \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.033321 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-inventory\") pod \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.033403 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj675\" (UniqueName: \"kubernetes.io/projected/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-kube-api-access-jj675\") pod \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.033433 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovncontroller-config-0\") pod \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.037880 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-kube-api-access-jj675" (OuterVolumeSpecName: "kube-api-access-jj675") pod "f1b69a5f-4327-4ef7-a28d-a638e579ea5d" (UID: "f1b69a5f-4327-4ef7-a28d-a638e579ea5d"). InnerVolumeSpecName "kube-api-access-jj675". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.043426 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f1b69a5f-4327-4ef7-a28d-a638e579ea5d" (UID: "f1b69a5f-4327-4ef7-a28d-a638e579ea5d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.065278 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-inventory" (OuterVolumeSpecName: "inventory") pod "f1b69a5f-4327-4ef7-a28d-a638e579ea5d" (UID: "f1b69a5f-4327-4ef7-a28d-a638e579ea5d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.069441 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "f1b69a5f-4327-4ef7-a28d-a638e579ea5d" (UID: "f1b69a5f-4327-4ef7-a28d-a638e579ea5d"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.075636 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f1b69a5f-4327-4ef7-a28d-a638e579ea5d" (UID: "f1b69a5f-4327-4ef7-a28d-a638e579ea5d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.136856 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.136911 4761 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.136938 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.136970 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj675\" (UniqueName: \"kubernetes.io/projected/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-kube-api-access-jj675\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.136987 4761 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.394312 4761 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" event={"ID":"f1b69a5f-4327-4ef7-a28d-a638e579ea5d","Type":"ContainerDied","Data":"4a2f27ddc9e28fdc97fa8ba61ae89bd6ee20adcf1a91345f5d3e767fdefc7ce8"} Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.394580 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a2f27ddc9e28fdc97fa8ba61ae89bd6ee20adcf1a91345f5d3e767fdefc7ce8" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.394401 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.580790 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6"] Mar 07 08:29:04 crc kubenswrapper[4761]: E0307 08:29:04.581471 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a1f3f8-f795-495f-bb5a-58c9511a97f2" containerName="oc" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.581498 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a1f3f8-f795-495f-bb5a-58c9511a97f2" containerName="oc" Mar 07 08:29:04 crc kubenswrapper[4761]: E0307 08:29:04.581561 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1b69a5f-4327-4ef7-a28d-a638e579ea5d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.581571 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1b69a5f-4327-4ef7-a28d-a638e579ea5d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.581945 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1b69a5f-4327-4ef7-a28d-a638e579ea5d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.581982 4761 
memory_manager.go:354] "RemoveStaleState removing state" podUID="24a1f3f8-f795-495f-bb5a-58c9511a97f2" containerName="oc" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.583173 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.587353 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.587377 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.587621 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.587747 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.587973 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.588050 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.594910 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6"] Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.751891 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btxcp\" (UniqueName: \"kubernetes.io/projected/27ac2fbd-f084-4103-97aa-45c01a3aea2a-kube-api-access-btxcp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: 
\"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.752003 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.752226 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.752299 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.752344 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.752372 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.856529 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.857653 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.858324 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" 
Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.858370 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.858519 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btxcp\" (UniqueName: \"kubernetes.io/projected/27ac2fbd-f084-4103-97aa-45c01a3aea2a-kube-api-access-btxcp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.859903 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.861568 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.862977 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.864379 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.867622 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.868239 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.887771 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btxcp\" (UniqueName: 
\"kubernetes.io/projected/27ac2fbd-f084-4103-97aa-45c01a3aea2a-kube-api-access-btxcp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.913776 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:05 crc kubenswrapper[4761]: I0307 08:29:05.611968 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6"] Mar 07 08:29:05 crc kubenswrapper[4761]: I0307 08:29:05.618581 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:29:06 crc kubenswrapper[4761]: I0307 08:29:06.419149 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" event={"ID":"27ac2fbd-f084-4103-97aa-45c01a3aea2a","Type":"ContainerStarted","Data":"8d5cdbf65f0239b0d77b711d6cd083f73d6797751ee039b2c47202fe105217f8"} Mar 07 08:29:06 crc kubenswrapper[4761]: I0307 08:29:06.419507 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" event={"ID":"27ac2fbd-f084-4103-97aa-45c01a3aea2a","Type":"ContainerStarted","Data":"f3918ee8bb3158ffa0e8ec8f487ac2f11c660953af91359851835599ceaa04b1"} Mar 07 08:29:06 crc kubenswrapper[4761]: I0307 08:29:06.448813 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" podStartSLOduration=2.069317113 podStartE2EDuration="2.448793679s" podCreationTimestamp="2026-03-07 08:29:04 +0000 UTC" firstStartedPulling="2026-03-07 08:29:05.618341268 +0000 UTC m=+2402.527507733" lastFinishedPulling="2026-03-07 
08:29:05.997817824 +0000 UTC m=+2402.906984299" observedRunningTime="2026-03-07 08:29:06.440638175 +0000 UTC m=+2403.349804650" watchObservedRunningTime="2026-03-07 08:29:06.448793679 +0000 UTC m=+2403.357960164" Mar 07 08:29:19 crc kubenswrapper[4761]: I0307 08:29:19.040404 4761 scope.go:117] "RemoveContainer" containerID="98d4746bd821209a9116a5de380487afd770d79a6957041428405f00bc1c38f2" Mar 07 08:29:43 crc kubenswrapper[4761]: I0307 08:29:43.768814 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:29:43 crc kubenswrapper[4761]: I0307 08:29:43.769388 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:29:52 crc kubenswrapper[4761]: I0307 08:29:52.974516 4761 generic.go:334] "Generic (PLEG): container finished" podID="27ac2fbd-f084-4103-97aa-45c01a3aea2a" containerID="8d5cdbf65f0239b0d77b711d6cd083f73d6797751ee039b2c47202fe105217f8" exitCode=0 Mar 07 08:29:52 crc kubenswrapper[4761]: I0307 08:29:52.974615 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" event={"ID":"27ac2fbd-f084-4103-97aa-45c01a3aea2a","Type":"ContainerDied","Data":"8d5cdbf65f0239b0d77b711d6cd083f73d6797751ee039b2c47202fe105217f8"} Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.453590 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.566632 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btxcp\" (UniqueName: \"kubernetes.io/projected/27ac2fbd-f084-4103-97aa-45c01a3aea2a-kube-api-access-btxcp\") pod \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.566692 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-nova-metadata-neutron-config-0\") pod \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.566856 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-metadata-combined-ca-bundle\") pod \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.566912 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.566963 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-ssh-key-openstack-edpm-ipam\") pod \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\" (UID: 
\"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.567031 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-inventory\") pod \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.572658 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "27ac2fbd-f084-4103-97aa-45c01a3aea2a" (UID: "27ac2fbd-f084-4103-97aa-45c01a3aea2a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.575136 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27ac2fbd-f084-4103-97aa-45c01a3aea2a-kube-api-access-btxcp" (OuterVolumeSpecName: "kube-api-access-btxcp") pod "27ac2fbd-f084-4103-97aa-45c01a3aea2a" (UID: "27ac2fbd-f084-4103-97aa-45c01a3aea2a"). InnerVolumeSpecName "kube-api-access-btxcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.599693 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "27ac2fbd-f084-4103-97aa-45c01a3aea2a" (UID: "27ac2fbd-f084-4103-97aa-45c01a3aea2a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.601120 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-inventory" (OuterVolumeSpecName: "inventory") pod "27ac2fbd-f084-4103-97aa-45c01a3aea2a" (UID: "27ac2fbd-f084-4103-97aa-45c01a3aea2a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.606762 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "27ac2fbd-f084-4103-97aa-45c01a3aea2a" (UID: "27ac2fbd-f084-4103-97aa-45c01a3aea2a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.621899 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "27ac2fbd-f084-4103-97aa-45c01a3aea2a" (UID: "27ac2fbd-f084-4103-97aa-45c01a3aea2a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.669787 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.669865 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btxcp\" (UniqueName: \"kubernetes.io/projected/27ac2fbd-f084-4103-97aa-45c01a3aea2a-kube-api-access-btxcp\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.669882 4761 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.669895 4761 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.669911 4761 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.669926 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.998075 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" event={"ID":"27ac2fbd-f084-4103-97aa-45c01a3aea2a","Type":"ContainerDied","Data":"f3918ee8bb3158ffa0e8ec8f487ac2f11c660953af91359851835599ceaa04b1"} Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.998136 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3918ee8bb3158ffa0e8ec8f487ac2f11c660953af91359851835599ceaa04b1" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.998216 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.110845 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687"] Mar 07 08:29:55 crc kubenswrapper[4761]: E0307 08:29:55.111680 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ac2fbd-f084-4103-97aa-45c01a3aea2a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.111818 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ac2fbd-f084-4103-97aa-45c01a3aea2a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.112181 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="27ac2fbd-f084-4103-97aa-45c01a3aea2a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.113383 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.118588 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.118654 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.118731 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.118592 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.119013 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.132375 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687"] Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.187247 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.187324 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvn29\" (UniqueName: \"kubernetes.io/projected/becfd5e1-5c42-4a2c-83ca-bd7f02855288-kube-api-access-gvn29\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: 
\"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.187509 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.187662 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.187700 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.290525 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.290961 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.290995 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.291168 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.291256 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvn29\" (UniqueName: \"kubernetes.io/projected/becfd5e1-5c42-4a2c-83ca-bd7f02855288-kube-api-access-gvn29\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.296275 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: 
\"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.296331 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.296656 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.296856 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.314211 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvn29\" (UniqueName: \"kubernetes.io/projected/becfd5e1-5c42-4a2c-83ca-bd7f02855288-kube-api-access-gvn29\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.498262 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:56 crc kubenswrapper[4761]: I0307 08:29:56.120968 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687"] Mar 07 08:29:57 crc kubenswrapper[4761]: I0307 08:29:57.022724 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" event={"ID":"becfd5e1-5c42-4a2c-83ca-bd7f02855288","Type":"ContainerStarted","Data":"fb06cafc75ca98eee0a01f20cb296ae25aa8b32b6410c7c9c6af7c16cfc47495"} Mar 07 08:29:58 crc kubenswrapper[4761]: I0307 08:29:58.038067 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" event={"ID":"becfd5e1-5c42-4a2c-83ca-bd7f02855288","Type":"ContainerStarted","Data":"c201185b9a0cebbe4fdd3a6708f02c737bc1bd081bdb2e01162726d4aa7b4d84"} Mar 07 08:29:58 crc kubenswrapper[4761]: I0307 08:29:58.070345 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" podStartSLOduration=2.453981178 podStartE2EDuration="3.07032313s" podCreationTimestamp="2026-03-07 08:29:55 +0000 UTC" firstStartedPulling="2026-03-07 08:29:56.129413212 +0000 UTC m=+2453.038579707" lastFinishedPulling="2026-03-07 08:29:56.745755174 +0000 UTC m=+2453.654921659" observedRunningTime="2026-03-07 08:29:58.068118334 +0000 UTC m=+2454.977284829" watchObservedRunningTime="2026-03-07 08:29:58.07032313 +0000 UTC m=+2454.979489605" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.161035 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547870-29rm2"] Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.163904 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547870-29rm2" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.167514 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.168236 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.178585 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.182131 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9"] Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.184140 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.190107 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.191200 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.202189 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9"] Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.215497 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547870-29rm2"] Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.256157 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sqh6\" (UniqueName: 
\"kubernetes.io/projected/5d0714e5-c95e-4bca-8c34-abeff1b1fd92-kube-api-access-2sqh6\") pod \"auto-csr-approver-29547870-29rm2\" (UID: \"5d0714e5-c95e-4bca-8c34-abeff1b1fd92\") " pod="openshift-infra/auto-csr-approver-29547870-29rm2" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.256225 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14b5f1dc-f0be-4c41-87a0-d623568079c0-config-volume\") pod \"collect-profiles-29547870-pjzf9\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.256252 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14b5f1dc-f0be-4c41-87a0-d623568079c0-secret-volume\") pod \"collect-profiles-29547870-pjzf9\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.256346 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96frn\" (UniqueName: \"kubernetes.io/projected/14b5f1dc-f0be-4c41-87a0-d623568079c0-kube-api-access-96frn\") pod \"collect-profiles-29547870-pjzf9\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.359410 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14b5f1dc-f0be-4c41-87a0-d623568079c0-config-volume\") pod \"collect-profiles-29547870-pjzf9\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 
crc kubenswrapper[4761]: I0307 08:30:00.359548 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14b5f1dc-f0be-4c41-87a0-d623568079c0-secret-volume\") pod \"collect-profiles-29547870-pjzf9\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.360272 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14b5f1dc-f0be-4c41-87a0-d623568079c0-config-volume\") pod \"collect-profiles-29547870-pjzf9\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.361084 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96frn\" (UniqueName: \"kubernetes.io/projected/14b5f1dc-f0be-4c41-87a0-d623568079c0-kube-api-access-96frn\") pod \"collect-profiles-29547870-pjzf9\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.361279 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sqh6\" (UniqueName: \"kubernetes.io/projected/5d0714e5-c95e-4bca-8c34-abeff1b1fd92-kube-api-access-2sqh6\") pod \"auto-csr-approver-29547870-29rm2\" (UID: \"5d0714e5-c95e-4bca-8c34-abeff1b1fd92\") " pod="openshift-infra/auto-csr-approver-29547870-29rm2" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.369847 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14b5f1dc-f0be-4c41-87a0-d623568079c0-secret-volume\") pod \"collect-profiles-29547870-pjzf9\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.380569 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96frn\" (UniqueName: \"kubernetes.io/projected/14b5f1dc-f0be-4c41-87a0-d623568079c0-kube-api-access-96frn\") pod \"collect-profiles-29547870-pjzf9\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.392564 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sqh6\" (UniqueName: \"kubernetes.io/projected/5d0714e5-c95e-4bca-8c34-abeff1b1fd92-kube-api-access-2sqh6\") pod \"auto-csr-approver-29547870-29rm2\" (UID: \"5d0714e5-c95e-4bca-8c34-abeff1b1fd92\") " pod="openshift-infra/auto-csr-approver-29547870-29rm2" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.493457 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547870-29rm2" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.506422 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:01 crc kubenswrapper[4761]: I0307 08:30:01.047865 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9"] Mar 07 08:30:01 crc kubenswrapper[4761]: I0307 08:30:01.059517 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547870-29rm2"] Mar 07 08:30:01 crc kubenswrapper[4761]: W0307 08:30:01.071486 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d0714e5_c95e_4bca_8c34_abeff1b1fd92.slice/crio-20d5de24d1d9893de2fdd72e94b88eebb09a1a4244f27d7659a79b5f8eb54594 WatchSource:0}: Error finding container 20d5de24d1d9893de2fdd72e94b88eebb09a1a4244f27d7659a79b5f8eb54594: Status 404 returned error can't find the container with id 20d5de24d1d9893de2fdd72e94b88eebb09a1a4244f27d7659a79b5f8eb54594 Mar 07 08:30:01 crc kubenswrapper[4761]: I0307 08:30:01.103524 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" event={"ID":"14b5f1dc-f0be-4c41-87a0-d623568079c0","Type":"ContainerStarted","Data":"2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413"} Mar 07 08:30:01 crc kubenswrapper[4761]: I0307 08:30:01.105071 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547870-29rm2" event={"ID":"5d0714e5-c95e-4bca-8c34-abeff1b1fd92","Type":"ContainerStarted","Data":"20d5de24d1d9893de2fdd72e94b88eebb09a1a4244f27d7659a79b5f8eb54594"} Mar 07 08:30:02 crc kubenswrapper[4761]: I0307 08:30:02.119759 4761 generic.go:334] "Generic (PLEG): container finished" podID="14b5f1dc-f0be-4c41-87a0-d623568079c0" containerID="9ec68e26cc56db9378f3e81051fa808a0bd9358047a5695ad5208364aef8551a" exitCode=0 Mar 07 08:30:02 crc kubenswrapper[4761]: I0307 08:30:02.119806 
4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" event={"ID":"14b5f1dc-f0be-4c41-87a0-d623568079c0","Type":"ContainerDied","Data":"9ec68e26cc56db9378f3e81051fa808a0bd9358047a5695ad5208364aef8551a"}
Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.508516 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9"
Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.547451 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14b5f1dc-f0be-4c41-87a0-d623568079c0-config-volume\") pod \"14b5f1dc-f0be-4c41-87a0-d623568079c0\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") "
Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.547504 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96frn\" (UniqueName: \"kubernetes.io/projected/14b5f1dc-f0be-4c41-87a0-d623568079c0-kube-api-access-96frn\") pod \"14b5f1dc-f0be-4c41-87a0-d623568079c0\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") "
Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.547598 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14b5f1dc-f0be-4c41-87a0-d623568079c0-secret-volume\") pod \"14b5f1dc-f0be-4c41-87a0-d623568079c0\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") "
Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.561916 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14b5f1dc-f0be-4c41-87a0-d623568079c0-config-volume" (OuterVolumeSpecName: "config-volume") pod "14b5f1dc-f0be-4c41-87a0-d623568079c0" (UID: "14b5f1dc-f0be-4c41-87a0-d623568079c0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.565180 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14b5f1dc-f0be-4c41-87a0-d623568079c0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "14b5f1dc-f0be-4c41-87a0-d623568079c0" (UID: "14b5f1dc-f0be-4c41-87a0-d623568079c0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.571793 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14b5f1dc-f0be-4c41-87a0-d623568079c0-kube-api-access-96frn" (OuterVolumeSpecName: "kube-api-access-96frn") pod "14b5f1dc-f0be-4c41-87a0-d623568079c0" (UID: "14b5f1dc-f0be-4c41-87a0-d623568079c0"). InnerVolumeSpecName "kube-api-access-96frn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.650221 4761 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14b5f1dc-f0be-4c41-87a0-d623568079c0-config-volume\") on node \"crc\" DevicePath \"\""
Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.650265 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96frn\" (UniqueName: \"kubernetes.io/projected/14b5f1dc-f0be-4c41-87a0-d623568079c0-kube-api-access-96frn\") on node \"crc\" DevicePath \"\""
Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.650276 4761 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14b5f1dc-f0be-4c41-87a0-d623568079c0-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 07 08:30:04 crc kubenswrapper[4761]: I0307 08:30:04.154423 4761 generic.go:334] "Generic (PLEG): container finished" podID="5d0714e5-c95e-4bca-8c34-abeff1b1fd92" containerID="2b6d3e93406ac228f6d5810b53d964adf521c73909ac73e439e3d83972514623" exitCode=0
Mar 07 08:30:04 crc kubenswrapper[4761]: I0307 08:30:04.154494 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547870-29rm2" event={"ID":"5d0714e5-c95e-4bca-8c34-abeff1b1fd92","Type":"ContainerDied","Data":"2b6d3e93406ac228f6d5810b53d964adf521c73909ac73e439e3d83972514623"}
Mar 07 08:30:04 crc kubenswrapper[4761]: I0307 08:30:04.156675 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" event={"ID":"14b5f1dc-f0be-4c41-87a0-d623568079c0","Type":"ContainerDied","Data":"2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413"}
Mar 07 08:30:04 crc kubenswrapper[4761]: I0307 08:30:04.156732 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413"
Mar 07 08:30:04 crc kubenswrapper[4761]: I0307 08:30:04.156790 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9"
Mar 07 08:30:04 crc kubenswrapper[4761]: I0307 08:30:04.585128 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6"]
Mar 07 08:30:04 crc kubenswrapper[4761]: I0307 08:30:04.594493 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6"]
Mar 07 08:30:05 crc kubenswrapper[4761]: E0307 08:30:05.381760 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache]"
Mar 07 08:30:05 crc kubenswrapper[4761]: I0307 08:30:05.610138 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547870-29rm2"
Mar 07 08:30:05 crc kubenswrapper[4761]: I0307 08:30:05.702330 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sqh6\" (UniqueName: \"kubernetes.io/projected/5d0714e5-c95e-4bca-8c34-abeff1b1fd92-kube-api-access-2sqh6\") pod \"5d0714e5-c95e-4bca-8c34-abeff1b1fd92\" (UID: \"5d0714e5-c95e-4bca-8c34-abeff1b1fd92\") "
Mar 07 08:30:05 crc kubenswrapper[4761]: I0307 08:30:05.709112 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d0714e5-c95e-4bca-8c34-abeff1b1fd92-kube-api-access-2sqh6" (OuterVolumeSpecName: "kube-api-access-2sqh6") pod "5d0714e5-c95e-4bca-8c34-abeff1b1fd92" (UID: "5d0714e5-c95e-4bca-8c34-abeff1b1fd92"). InnerVolumeSpecName "kube-api-access-2sqh6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:30:05 crc kubenswrapper[4761]: I0307 08:30:05.727491 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66a6be2c-da25-42c0-a8fa-075b8273bb65" path="/var/lib/kubelet/pods/66a6be2c-da25-42c0-a8fa-075b8273bb65/volumes"
Mar 07 08:30:05 crc kubenswrapper[4761]: I0307 08:30:05.810468 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sqh6\" (UniqueName: \"kubernetes.io/projected/5d0714e5-c95e-4bca-8c34-abeff1b1fd92-kube-api-access-2sqh6\") on node \"crc\" DevicePath \"\""
Mar 07 08:30:06 crc kubenswrapper[4761]: I0307 08:30:06.186161 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547870-29rm2" event={"ID":"5d0714e5-c95e-4bca-8c34-abeff1b1fd92","Type":"ContainerDied","Data":"20d5de24d1d9893de2fdd72e94b88eebb09a1a4244f27d7659a79b5f8eb54594"}
Mar 07 08:30:06 crc kubenswrapper[4761]: I0307 08:30:06.186499 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20d5de24d1d9893de2fdd72e94b88eebb09a1a4244f27d7659a79b5f8eb54594"
Mar 07 08:30:06 crc kubenswrapper[4761]: I0307 08:30:06.186563 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547870-29rm2"
Mar 07 08:30:06 crc kubenswrapper[4761]: I0307 08:30:06.711129 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547864-s7kqr"]
Mar 07 08:30:06 crc kubenswrapper[4761]: I0307 08:30:06.726074 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547864-s7kqr"]
Mar 07 08:30:07 crc kubenswrapper[4761]: I0307 08:30:07.723426 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2269f929-4b06-4694-8123-6741b2adfa58" path="/var/lib/kubelet/pods/2269f929-4b06-4694-8123-6741b2adfa58/volumes"
Mar 07 08:30:13 crc kubenswrapper[4761]: I0307 08:30:13.768334 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 08:30:13 crc kubenswrapper[4761]: I0307 08:30:13.768939 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 08:30:13 crc kubenswrapper[4761]: E0307 08:30:13.818485 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache]"
Mar 07 08:30:15 crc kubenswrapper[4761]: E0307 08:30:15.432173 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache]"
Mar 07 08:30:19 crc kubenswrapper[4761]: I0307 08:30:19.216418 4761 scope.go:117] "RemoveContainer" containerID="090a7e140ac0a1c9c2a8e95ff23a018d80262e37d5a50b31d6f03c5d5e1dc22c"
Mar 07 08:30:19 crc kubenswrapper[4761]: I0307 08:30:19.271461 4761 scope.go:117] "RemoveContainer" containerID="b26ebf4b31ad9b755874a090e2400d415f6a366f21084cf982eba0cc6f886633"
Mar 07 08:30:25 crc kubenswrapper[4761]: E0307 08:30:25.732392 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache]"
Mar 07 08:30:28 crc kubenswrapper[4761]: E0307 08:30:28.560167 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache]"
Mar 07 08:30:36 crc kubenswrapper[4761]: E0307 08:30:36.080046 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache]"
Mar 07 08:30:43 crc kubenswrapper[4761]: I0307 08:30:43.767956 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 08:30:43 crc kubenswrapper[4761]: I0307 08:30:43.768516 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 08:30:43 crc kubenswrapper[4761]: I0307 08:30:43.768566 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9"
Mar 07 08:30:43 crc kubenswrapper[4761]: I0307 08:30:43.769561 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 07 08:30:43 crc kubenswrapper[4761]: I0307 08:30:43.769618 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" gracePeriod=600
Mar 07 08:30:43 crc kubenswrapper[4761]: E0307 08:30:43.778253 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache]"
Mar 07 08:30:43 crc kubenswrapper[4761]: E0307 08:30:43.892330 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:30:44 crc kubenswrapper[4761]: I0307 08:30:44.744058 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" exitCode=0
Mar 07 08:30:44 crc kubenswrapper[4761]: I0307 08:30:44.744133 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"}
Mar 07 08:30:44 crc kubenswrapper[4761]: I0307 08:30:44.744174 4761 scope.go:117] "RemoveContainer" containerID="4205e887a96e2c7dfc1520ac45c44653f6029f5d7474aa135bc6c6eb298eb9d6"
Mar 07 08:30:44 crc kubenswrapper[4761]: I0307 08:30:44.745068 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"
Mar 07 08:30:44 crc kubenswrapper[4761]: E0307 08:30:44.745448 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:30:46 crc kubenswrapper[4761]: E0307 08:30:46.128787 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache]"
Mar 07 08:30:48 crc kubenswrapper[4761]: E0307 08:30:48.106878 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache]"
Mar 07 08:30:48 crc kubenswrapper[4761]: E0307 08:30:48.109893 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache]"
Mar 07 08:30:55 crc kubenswrapper[4761]: I0307 08:30:55.705771 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"
Mar 07 08:30:55 crc kubenswrapper[4761]: E0307 08:30:55.706637 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:30:56 crc kubenswrapper[4761]: E0307 08:30:56.494223 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache]"
Mar 07 08:30:58 crc kubenswrapper[4761]: E0307 08:30:58.551564 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache]"
Mar 07 08:31:09 crc kubenswrapper[4761]: I0307 08:31:09.706855 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"
Mar 07 08:31:09 crc kubenswrapper[4761]: E0307 08:31:09.708256 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:31:13 crc kubenswrapper[4761]: I0307 08:31:13.971089 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rxq8t"]
Mar 07 08:31:13 crc kubenswrapper[4761]: E0307 08:31:13.972033 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14b5f1dc-f0be-4c41-87a0-d623568079c0" containerName="collect-profiles"
Mar 07 08:31:13 crc kubenswrapper[4761]: I0307 08:31:13.972046 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b5f1dc-f0be-4c41-87a0-d623568079c0" containerName="collect-profiles"
Mar 07 08:31:13 crc kubenswrapper[4761]: E0307 08:31:13.972071 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0714e5-c95e-4bca-8c34-abeff1b1fd92" containerName="oc"
Mar 07 08:31:13 crc kubenswrapper[4761]: I0307 08:31:13.972078 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0714e5-c95e-4bca-8c34-abeff1b1fd92" containerName="oc"
Mar 07 08:31:13 crc kubenswrapper[4761]: I0307 08:31:13.972296 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d0714e5-c95e-4bca-8c34-abeff1b1fd92" containerName="oc"
Mar 07 08:31:13 crc kubenswrapper[4761]: I0307 08:31:13.972315 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="14b5f1dc-f0be-4c41-87a0-d623568079c0" containerName="collect-profiles"
Mar 07 08:31:13 crc kubenswrapper[4761]: I0307 08:31:13.973987 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxq8t"
Mar 07 08:31:13 crc kubenswrapper[4761]: I0307 08:31:13.992525 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxq8t"]
Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.099148 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fblqc\" (UniqueName: \"kubernetes.io/projected/2c033ef8-2189-4478-88d9-d9b71894f4cc-kube-api-access-fblqc\") pod \"redhat-marketplace-rxq8t\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " pod="openshift-marketplace/redhat-marketplace-rxq8t"
Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.099276 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-catalog-content\") pod \"redhat-marketplace-rxq8t\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " pod="openshift-marketplace/redhat-marketplace-rxq8t"
Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.099375 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-utilities\") pod \"redhat-marketplace-rxq8t\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " pod="openshift-marketplace/redhat-marketplace-rxq8t"
Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.201774 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-utilities\") pod \"redhat-marketplace-rxq8t\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " pod="openshift-marketplace/redhat-marketplace-rxq8t"
Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.201858 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fblqc\" (UniqueName: \"kubernetes.io/projected/2c033ef8-2189-4478-88d9-d9b71894f4cc-kube-api-access-fblqc\") pod \"redhat-marketplace-rxq8t\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " pod="openshift-marketplace/redhat-marketplace-rxq8t"
Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.201956 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-catalog-content\") pod \"redhat-marketplace-rxq8t\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " pod="openshift-marketplace/redhat-marketplace-rxq8t"
Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.202225 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-utilities\") pod \"redhat-marketplace-rxq8t\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " pod="openshift-marketplace/redhat-marketplace-rxq8t"
Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.202266 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-catalog-content\") pod \"redhat-marketplace-rxq8t\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " pod="openshift-marketplace/redhat-marketplace-rxq8t"
Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.233455 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fblqc\" (UniqueName: \"kubernetes.io/projected/2c033ef8-2189-4478-88d9-d9b71894f4cc-kube-api-access-fblqc\") pod \"redhat-marketplace-rxq8t\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " pod="openshift-marketplace/redhat-marketplace-rxq8t"
Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.315151 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxq8t"
Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.877812 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxq8t"]
Mar 07 08:31:15 crc kubenswrapper[4761]: I0307 08:31:15.188688 4761 generic.go:334] "Generic (PLEG): container finished" podID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerID="f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85" exitCode=0
Mar 07 08:31:15 crc kubenswrapper[4761]: I0307 08:31:15.188763 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxq8t" event={"ID":"2c033ef8-2189-4478-88d9-d9b71894f4cc","Type":"ContainerDied","Data":"f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85"}
Mar 07 08:31:15 crc kubenswrapper[4761]: I0307 08:31:15.188794 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxq8t" event={"ID":"2c033ef8-2189-4478-88d9-d9b71894f4cc","Type":"ContainerStarted","Data":"92489cf45605a76e855e0ea2bedf35f690b2ae1f38ec3a79cd12c4fa9173e492"}
Mar 07 08:31:16 crc kubenswrapper[4761]: I0307 08:31:16.208272 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxq8t" event={"ID":"2c033ef8-2189-4478-88d9-d9b71894f4cc","Type":"ContainerStarted","Data":"7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51"}
Mar 07 08:31:17 crc kubenswrapper[4761]: I0307 08:31:17.227241 4761 generic.go:334] "Generic (PLEG): container finished" podID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerID="7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51" exitCode=0
Mar 07 08:31:17 crc kubenswrapper[4761]: I0307 08:31:17.227298 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxq8t" event={"ID":"2c033ef8-2189-4478-88d9-d9b71894f4cc","Type":"ContainerDied","Data":"7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51"}
Mar 07 08:31:18 crc kubenswrapper[4761]: I0307 08:31:18.241180 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxq8t" event={"ID":"2c033ef8-2189-4478-88d9-d9b71894f4cc","Type":"ContainerStarted","Data":"ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818"}
Mar 07 08:31:18 crc kubenswrapper[4761]: I0307 08:31:18.262650 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rxq8t" podStartSLOduration=2.831603239 podStartE2EDuration="5.2626332s" podCreationTimestamp="2026-03-07 08:31:13 +0000 UTC" firstStartedPulling="2026-03-07 08:31:15.191731891 +0000 UTC m=+2532.100898376" lastFinishedPulling="2026-03-07 08:31:17.622761842 +0000 UTC m=+2534.531928337" observedRunningTime="2026-03-07 08:31:18.26223631 +0000 UTC m=+2535.171402805" watchObservedRunningTime="2026-03-07 08:31:18.2626332 +0000 UTC m=+2535.171799675"
Mar 07 08:31:22 crc kubenswrapper[4761]: I0307 08:31:22.706025 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"
Mar 07 08:31:22 crc kubenswrapper[4761]: E0307 08:31:22.708282 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:31:24 crc kubenswrapper[4761]: I0307 08:31:24.318953 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rxq8t"
Mar 07 08:31:24 crc kubenswrapper[4761]: I0307 08:31:24.319535 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rxq8t"
Mar 07 08:31:24 crc kubenswrapper[4761]: I0307 08:31:24.408799 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rxq8t"
Mar 07 08:31:25 crc kubenswrapper[4761]: I0307 08:31:25.440950 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rxq8t"
Mar 07 08:31:25 crc kubenswrapper[4761]: I0307 08:31:25.518648 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxq8t"]
Mar 07 08:31:27 crc kubenswrapper[4761]: I0307 08:31:27.361295 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rxq8t" podUID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerName="registry-server" containerID="cri-o://ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818" gracePeriod=2
Mar 07 08:31:27 crc kubenswrapper[4761]: I0307 08:31:27.954412 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxq8t"
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.012046 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fblqc\" (UniqueName: \"kubernetes.io/projected/2c033ef8-2189-4478-88d9-d9b71894f4cc-kube-api-access-fblqc\") pod \"2c033ef8-2189-4478-88d9-d9b71894f4cc\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") "
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.012384 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-catalog-content\") pod \"2c033ef8-2189-4478-88d9-d9b71894f4cc\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") "
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.012451 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-utilities\") pod \"2c033ef8-2189-4478-88d9-d9b71894f4cc\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") "
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.024678 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-utilities" (OuterVolumeSpecName: "utilities") pod "2c033ef8-2189-4478-88d9-d9b71894f4cc" (UID: "2c033ef8-2189-4478-88d9-d9b71894f4cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.047325 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c033ef8-2189-4478-88d9-d9b71894f4cc" (UID: "2c033ef8-2189-4478-88d9-d9b71894f4cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.066375 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c033ef8-2189-4478-88d9-d9b71894f4cc-kube-api-access-fblqc" (OuterVolumeSpecName: "kube-api-access-fblqc") pod "2c033ef8-2189-4478-88d9-d9b71894f4cc" (UID: "2c033ef8-2189-4478-88d9-d9b71894f4cc"). InnerVolumeSpecName "kube-api-access-fblqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.116197 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.116232 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.116244 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fblqc\" (UniqueName: \"kubernetes.io/projected/2c033ef8-2189-4478-88d9-d9b71894f4cc-kube-api-access-fblqc\") on node \"crc\" DevicePath \"\""
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.373983 4761 generic.go:334] "Generic (PLEG): container finished" podID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerID="ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818" exitCode=0
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.374028 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxq8t" event={"ID":"2c033ef8-2189-4478-88d9-d9b71894f4cc","Type":"ContainerDied","Data":"ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818"}
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.374053 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxq8t" event={"ID":"2c033ef8-2189-4478-88d9-d9b71894f4cc","Type":"ContainerDied","Data":"92489cf45605a76e855e0ea2bedf35f690b2ae1f38ec3a79cd12c4fa9173e492"}
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.374069 4761 scope.go:117] "RemoveContainer" containerID="ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818"
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.374202 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxq8t"
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.421163 4761 scope.go:117] "RemoveContainer" containerID="7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51"
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.430444 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxq8t"]
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.452164 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxq8t"]
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.456032 4761 scope.go:117] "RemoveContainer" containerID="f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85"
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.584569 4761 scope.go:117] "RemoveContainer" containerID="ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818"
Mar 07 08:31:28 crc kubenswrapper[4761]: E0307 08:31:28.586369 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818\": container with ID starting with ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818 not found: ID does not exist" containerID="ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818"
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.586415 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818"} err="failed to get container status \"ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818\": rpc error: code = NotFound desc = could not find container \"ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818\": container with ID starting with ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818 not found: ID does not exist"
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.586440 4761 scope.go:117] "RemoveContainer" containerID="7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51"
Mar 07 08:31:28 crc kubenswrapper[4761]: E0307 08:31:28.586760 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51\": container with ID starting with 7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51 not found: ID does not exist" containerID="7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51"
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.586793 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51"} err="failed to get container status \"7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51\": rpc error: code = NotFound desc = could not find container \"7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51\": container with ID starting with 7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51 not found: ID does not exist"
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.586813 4761 scope.go:117] "RemoveContainer" containerID="f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85"
Mar 07 08:31:28 crc kubenswrapper[4761]: E0307 08:31:28.587148 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85\": container with ID starting with f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85 not found: ID does not exist" containerID="f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85"
Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.587205 4761 pod_container_deletor.go:53]
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85"} err="failed to get container status \"f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85\": rpc error: code = NotFound desc = could not find container \"f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85\": container with ID starting with f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85 not found: ID does not exist" Mar 07 08:31:29 crc kubenswrapper[4761]: I0307 08:31:29.732623 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c033ef8-2189-4478-88d9-d9b71894f4cc" path="/var/lib/kubelet/pods/2c033ef8-2189-4478-88d9-d9b71894f4cc/volumes" Mar 07 08:31:34 crc kubenswrapper[4761]: I0307 08:31:34.706166 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:31:34 crc kubenswrapper[4761]: E0307 08:31:34.707312 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:31:49 crc kubenswrapper[4761]: I0307 08:31:49.708365 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:31:49 crc kubenswrapper[4761]: E0307 08:31:49.709317 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.159852 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547872-ztj7l"] Mar 07 08:32:00 crc kubenswrapper[4761]: E0307 08:32:00.162856 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerName="extract-utilities" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.162948 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerName="extract-utilities" Mar 07 08:32:00 crc kubenswrapper[4761]: E0307 08:32:00.163041 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerName="extract-content" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.163104 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerName="extract-content" Mar 07 08:32:00 crc kubenswrapper[4761]: E0307 08:32:00.163165 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerName="registry-server" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.163218 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerName="registry-server" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.163478 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerName="registry-server" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.164348 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547872-ztj7l" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.166770 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.167757 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.167992 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.179113 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547872-ztj7l"] Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.216169 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27xh7\" (UniqueName: \"kubernetes.io/projected/d4db60eb-e0dd-4faf-88bb-485798fe0bcf-kube-api-access-27xh7\") pod \"auto-csr-approver-29547872-ztj7l\" (UID: \"d4db60eb-e0dd-4faf-88bb-485798fe0bcf\") " pod="openshift-infra/auto-csr-approver-29547872-ztj7l" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.317977 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27xh7\" (UniqueName: \"kubernetes.io/projected/d4db60eb-e0dd-4faf-88bb-485798fe0bcf-kube-api-access-27xh7\") pod \"auto-csr-approver-29547872-ztj7l\" (UID: \"d4db60eb-e0dd-4faf-88bb-485798fe0bcf\") " pod="openshift-infra/auto-csr-approver-29547872-ztj7l" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.346320 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27xh7\" (UniqueName: \"kubernetes.io/projected/d4db60eb-e0dd-4faf-88bb-485798fe0bcf-kube-api-access-27xh7\") pod \"auto-csr-approver-29547872-ztj7l\" (UID: \"d4db60eb-e0dd-4faf-88bb-485798fe0bcf\") " 
pod="openshift-infra/auto-csr-approver-29547872-ztj7l" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.491782 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547872-ztj7l" Mar 07 08:32:01 crc kubenswrapper[4761]: I0307 08:32:01.029052 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547872-ztj7l"] Mar 07 08:32:01 crc kubenswrapper[4761]: I0307 08:32:01.709265 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:32:01 crc kubenswrapper[4761]: E0307 08:32:01.710263 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:32:01 crc kubenswrapper[4761]: I0307 08:32:01.868930 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547872-ztj7l" event={"ID":"d4db60eb-e0dd-4faf-88bb-485798fe0bcf","Type":"ContainerStarted","Data":"bfea03dc8cbd1736c154119744d4d21847902474d368330097f5ad6dcc290f00"} Mar 07 08:32:02 crc kubenswrapper[4761]: I0307 08:32:02.884353 4761 generic.go:334] "Generic (PLEG): container finished" podID="d4db60eb-e0dd-4faf-88bb-485798fe0bcf" containerID="0b113e193a3a066fddc11a489472112bd26a3752e7d8b8891536b813be907092" exitCode=0 Mar 07 08:32:02 crc kubenswrapper[4761]: I0307 08:32:02.884416 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547872-ztj7l" event={"ID":"d4db60eb-e0dd-4faf-88bb-485798fe0bcf","Type":"ContainerDied","Data":"0b113e193a3a066fddc11a489472112bd26a3752e7d8b8891536b813be907092"} 
Mar 07 08:32:04 crc kubenswrapper[4761]: I0307 08:32:04.266476 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547872-ztj7l" Mar 07 08:32:04 crc kubenswrapper[4761]: I0307 08:32:04.450828 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27xh7\" (UniqueName: \"kubernetes.io/projected/d4db60eb-e0dd-4faf-88bb-485798fe0bcf-kube-api-access-27xh7\") pod \"d4db60eb-e0dd-4faf-88bb-485798fe0bcf\" (UID: \"d4db60eb-e0dd-4faf-88bb-485798fe0bcf\") " Mar 07 08:32:04 crc kubenswrapper[4761]: I0307 08:32:04.462536 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4db60eb-e0dd-4faf-88bb-485798fe0bcf-kube-api-access-27xh7" (OuterVolumeSpecName: "kube-api-access-27xh7") pod "d4db60eb-e0dd-4faf-88bb-485798fe0bcf" (UID: "d4db60eb-e0dd-4faf-88bb-485798fe0bcf"). InnerVolumeSpecName "kube-api-access-27xh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:32:04 crc kubenswrapper[4761]: I0307 08:32:04.554448 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27xh7\" (UniqueName: \"kubernetes.io/projected/d4db60eb-e0dd-4faf-88bb-485798fe0bcf-kube-api-access-27xh7\") on node \"crc\" DevicePath \"\"" Mar 07 08:32:04 crc kubenswrapper[4761]: I0307 08:32:04.908274 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547872-ztj7l" event={"ID":"d4db60eb-e0dd-4faf-88bb-485798fe0bcf","Type":"ContainerDied","Data":"bfea03dc8cbd1736c154119744d4d21847902474d368330097f5ad6dcc290f00"} Mar 07 08:32:04 crc kubenswrapper[4761]: I0307 08:32:04.908332 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfea03dc8cbd1736c154119744d4d21847902474d368330097f5ad6dcc290f00" Mar 07 08:32:04 crc kubenswrapper[4761]: I0307 08:32:04.908360 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547872-ztj7l" Mar 07 08:32:05 crc kubenswrapper[4761]: I0307 08:32:05.363573 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547866-56rlk"] Mar 07 08:32:05 crc kubenswrapper[4761]: I0307 08:32:05.379983 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547866-56rlk"] Mar 07 08:32:05 crc kubenswrapper[4761]: I0307 08:32:05.723277 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa60f65d-1134-4cac-bd66-fd5a70f064d0" path="/var/lib/kubelet/pods/fa60f65d-1134-4cac-bd66-fd5a70f064d0/volumes" Mar 07 08:32:14 crc kubenswrapper[4761]: I0307 08:32:14.706491 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:32:14 crc kubenswrapper[4761]: E0307 08:32:14.707955 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:32:19 crc kubenswrapper[4761]: I0307 08:32:19.457050 4761 scope.go:117] "RemoveContainer" containerID="f7ea310432b36a6153cf31887d1ac9f40396d49116da350bd0b549363b4e3af6" Mar 07 08:32:27 crc kubenswrapper[4761]: I0307 08:32:27.706495 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:32:27 crc kubenswrapper[4761]: E0307 08:32:27.707921 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:32:42 crc kubenswrapper[4761]: I0307 08:32:42.706542 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:32:42 crc kubenswrapper[4761]: E0307 08:32:42.707540 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:32:54 crc kubenswrapper[4761]: I0307 08:32:54.706331 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:32:54 crc kubenswrapper[4761]: E0307 08:32:54.707627 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:33:07 crc kubenswrapper[4761]: I0307 08:33:07.706167 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:33:07 crc kubenswrapper[4761]: E0307 08:33:07.708823 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:33:21 crc kubenswrapper[4761]: I0307 08:33:21.708183 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:33:21 crc kubenswrapper[4761]: E0307 08:33:21.709136 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.729416 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z5tdl"] Mar 07 08:33:24 crc kubenswrapper[4761]: E0307 08:33:24.730342 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4db60eb-e0dd-4faf-88bb-485798fe0bcf" containerName="oc" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.730358 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4db60eb-e0dd-4faf-88bb-485798fe0bcf" containerName="oc" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.730634 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4db60eb-e0dd-4faf-88bb-485798fe0bcf" containerName="oc" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.732732 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.746988 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z5tdl"] Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.852004 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f58p\" (UniqueName: \"kubernetes.io/projected/17f198a4-fe86-41a7-91e2-10544cf984b4-kube-api-access-6f58p\") pod \"certified-operators-z5tdl\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.852271 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-utilities\") pod \"certified-operators-z5tdl\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.852571 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-catalog-content\") pod \"certified-operators-z5tdl\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.955096 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f58p\" (UniqueName: \"kubernetes.io/projected/17f198a4-fe86-41a7-91e2-10544cf984b4-kube-api-access-6f58p\") pod \"certified-operators-z5tdl\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.955202 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-utilities\") pod \"certified-operators-z5tdl\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.955265 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-catalog-content\") pod \"certified-operators-z5tdl\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.955807 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-utilities\") pod \"certified-operators-z5tdl\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.955858 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-catalog-content\") pod \"certified-operators-z5tdl\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.982399 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f58p\" (UniqueName: \"kubernetes.io/projected/17f198a4-fe86-41a7-91e2-10544cf984b4-kube-api-access-6f58p\") pod \"certified-operators-z5tdl\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:25 crc kubenswrapper[4761]: I0307 08:33:25.104379 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:25 crc kubenswrapper[4761]: I0307 08:33:25.699218 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z5tdl"] Mar 07 08:33:25 crc kubenswrapper[4761]: I0307 08:33:25.951795 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5tdl" event={"ID":"17f198a4-fe86-41a7-91e2-10544cf984b4","Type":"ContainerStarted","Data":"3bad4741f304f9654073a150f7d9a20a29668f1c7d82ad1cf3b0369848a1f027"} Mar 07 08:33:25 crc kubenswrapper[4761]: I0307 08:33:25.951838 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5tdl" event={"ID":"17f198a4-fe86-41a7-91e2-10544cf984b4","Type":"ContainerStarted","Data":"6a104c053083ae5322f30501eadfe20d8a7b4dd18607d776c30f39913136438a"} Mar 07 08:33:26 crc kubenswrapper[4761]: I0307 08:33:26.973046 4761 generic.go:334] "Generic (PLEG): container finished" podID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerID="3bad4741f304f9654073a150f7d9a20a29668f1c7d82ad1cf3b0369848a1f027" exitCode=0 Mar 07 08:33:26 crc kubenswrapper[4761]: I0307 08:33:26.973249 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5tdl" event={"ID":"17f198a4-fe86-41a7-91e2-10544cf984b4","Type":"ContainerDied","Data":"3bad4741f304f9654073a150f7d9a20a29668f1c7d82ad1cf3b0369848a1f027"} Mar 07 08:33:29 crc kubenswrapper[4761]: I0307 08:33:29.027158 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5tdl" event={"ID":"17f198a4-fe86-41a7-91e2-10544cf984b4","Type":"ContainerStarted","Data":"77327279867cd2a1dd1c36663833c82ed5be9cbc83ccf92450d515c6d0bcfa61"} Mar 07 08:33:31 crc kubenswrapper[4761]: I0307 08:33:31.059908 4761 generic.go:334] "Generic (PLEG): container finished" podID="17f198a4-fe86-41a7-91e2-10544cf984b4" 
containerID="77327279867cd2a1dd1c36663833c82ed5be9cbc83ccf92450d515c6d0bcfa61" exitCode=0 Mar 07 08:33:31 crc kubenswrapper[4761]: I0307 08:33:31.059971 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5tdl" event={"ID":"17f198a4-fe86-41a7-91e2-10544cf984b4","Type":"ContainerDied","Data":"77327279867cd2a1dd1c36663833c82ed5be9cbc83ccf92450d515c6d0bcfa61"} Mar 07 08:33:32 crc kubenswrapper[4761]: I0307 08:33:32.078681 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5tdl" event={"ID":"17f198a4-fe86-41a7-91e2-10544cf984b4","Type":"ContainerStarted","Data":"5a7911494899f07a9ddb0a3eef2aeefa947512eb70ccc7d054981034b0920baa"} Mar 07 08:33:32 crc kubenswrapper[4761]: I0307 08:33:32.110339 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z5tdl" podStartSLOduration=3.558058677 podStartE2EDuration="8.11032023s" podCreationTimestamp="2026-03-07 08:33:24 +0000 UTC" firstStartedPulling="2026-03-07 08:33:26.97623626 +0000 UTC m=+2663.885402745" lastFinishedPulling="2026-03-07 08:33:31.528497813 +0000 UTC m=+2668.437664298" observedRunningTime="2026-03-07 08:33:32.109383117 +0000 UTC m=+2669.018549672" watchObservedRunningTime="2026-03-07 08:33:32.11032023 +0000 UTC m=+2669.019486705" Mar 07 08:33:34 crc kubenswrapper[4761]: I0307 08:33:34.705951 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:33:34 crc kubenswrapper[4761]: E0307 08:33:34.706816 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" 
podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:33:35 crc kubenswrapper[4761]: I0307 08:33:35.105143 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:35 crc kubenswrapper[4761]: I0307 08:33:35.105203 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:36 crc kubenswrapper[4761]: I0307 08:33:36.164388 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-z5tdl" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerName="registry-server" probeResult="failure" output=< Mar 07 08:33:36 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:33:36 crc kubenswrapper[4761]: > Mar 07 08:33:45 crc kubenswrapper[4761]: I0307 08:33:45.185625 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:45 crc kubenswrapper[4761]: I0307 08:33:45.250814 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:47 crc kubenswrapper[4761]: I0307 08:33:47.706535 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:33:47 crc kubenswrapper[4761]: E0307 08:33:47.707650 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:33:49 crc kubenswrapper[4761]: I0307 08:33:49.823425 4761 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/certified-operators-z5tdl"] Mar 07 08:33:49 crc kubenswrapper[4761]: I0307 08:33:49.824466 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z5tdl" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerName="registry-server" containerID="cri-o://5a7911494899f07a9ddb0a3eef2aeefa947512eb70ccc7d054981034b0920baa" gracePeriod=2 Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.320849 4761 generic.go:334] "Generic (PLEG): container finished" podID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerID="5a7911494899f07a9ddb0a3eef2aeefa947512eb70ccc7d054981034b0920baa" exitCode=0 Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.320901 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5tdl" event={"ID":"17f198a4-fe86-41a7-91e2-10544cf984b4","Type":"ContainerDied","Data":"5a7911494899f07a9ddb0a3eef2aeefa947512eb70ccc7d054981034b0920baa"} Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.321647 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5tdl" event={"ID":"17f198a4-fe86-41a7-91e2-10544cf984b4","Type":"ContainerDied","Data":"6a104c053083ae5322f30501eadfe20d8a7b4dd18607d776c30f39913136438a"} Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.321797 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a104c053083ae5322f30501eadfe20d8a7b4dd18607d776c30f39913136438a" Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.404607 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.467398 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-utilities\") pod \"17f198a4-fe86-41a7-91e2-10544cf984b4\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.467989 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f58p\" (UniqueName: \"kubernetes.io/projected/17f198a4-fe86-41a7-91e2-10544cf984b4-kube-api-access-6f58p\") pod \"17f198a4-fe86-41a7-91e2-10544cf984b4\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.468125 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-catalog-content\") pod \"17f198a4-fe86-41a7-91e2-10544cf984b4\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.468208 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-utilities" (OuterVolumeSpecName: "utilities") pod "17f198a4-fe86-41a7-91e2-10544cf984b4" (UID: "17f198a4-fe86-41a7-91e2-10544cf984b4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.468664 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.473680 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f198a4-fe86-41a7-91e2-10544cf984b4-kube-api-access-6f58p" (OuterVolumeSpecName: "kube-api-access-6f58p") pod "17f198a4-fe86-41a7-91e2-10544cf984b4" (UID: "17f198a4-fe86-41a7-91e2-10544cf984b4"). InnerVolumeSpecName "kube-api-access-6f58p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.536033 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17f198a4-fe86-41a7-91e2-10544cf984b4" (UID: "17f198a4-fe86-41a7-91e2-10544cf984b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.571805 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f58p\" (UniqueName: \"kubernetes.io/projected/17f198a4-fe86-41a7-91e2-10544cf984b4-kube-api-access-6f58p\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.571836 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:51 crc kubenswrapper[4761]: I0307 08:33:51.341279 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:51 crc kubenswrapper[4761]: I0307 08:33:51.375454 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z5tdl"] Mar 07 08:33:51 crc kubenswrapper[4761]: I0307 08:33:51.384064 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z5tdl"] Mar 07 08:33:51 crc kubenswrapper[4761]: I0307 08:33:51.718753 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" path="/var/lib/kubelet/pods/17f198a4-fe86-41a7-91e2-10544cf984b4/volumes" Mar 07 08:33:53 crc kubenswrapper[4761]: I0307 08:33:53.375228 4761 generic.go:334] "Generic (PLEG): container finished" podID="becfd5e1-5c42-4a2c-83ca-bd7f02855288" containerID="c201185b9a0cebbe4fdd3a6708f02c737bc1bd081bdb2e01162726d4aa7b4d84" exitCode=0 Mar 07 08:33:53 crc kubenswrapper[4761]: I0307 08:33:53.375333 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" event={"ID":"becfd5e1-5c42-4a2c-83ca-bd7f02855288","Type":"ContainerDied","Data":"c201185b9a0cebbe4fdd3a6708f02c737bc1bd081bdb2e01162726d4aa7b4d84"} Mar 07 08:33:54 crc kubenswrapper[4761]: I0307 08:33:54.975516 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.015984 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-secret-0\") pod \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.016509 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-ssh-key-openstack-edpm-ipam\") pod \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.016836 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-inventory\") pod \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.017227 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvn29\" (UniqueName: \"kubernetes.io/projected/becfd5e1-5c42-4a2c-83ca-bd7f02855288-kube-api-access-gvn29\") pod \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.017442 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-combined-ca-bundle\") pod \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.024457 4761 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/becfd5e1-5c42-4a2c-83ca-bd7f02855288-kube-api-access-gvn29" (OuterVolumeSpecName: "kube-api-access-gvn29") pod "becfd5e1-5c42-4a2c-83ca-bd7f02855288" (UID: "becfd5e1-5c42-4a2c-83ca-bd7f02855288"). InnerVolumeSpecName "kube-api-access-gvn29". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.025295 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "becfd5e1-5c42-4a2c-83ca-bd7f02855288" (UID: "becfd5e1-5c42-4a2c-83ca-bd7f02855288"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.068637 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "becfd5e1-5c42-4a2c-83ca-bd7f02855288" (UID: "becfd5e1-5c42-4a2c-83ca-bd7f02855288"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.084106 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-inventory" (OuterVolumeSpecName: "inventory") pod "becfd5e1-5c42-4a2c-83ca-bd7f02855288" (UID: "becfd5e1-5c42-4a2c-83ca-bd7f02855288"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.087946 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "becfd5e1-5c42-4a2c-83ca-bd7f02855288" (UID: "becfd5e1-5c42-4a2c-83ca-bd7f02855288"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.121403 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvn29\" (UniqueName: \"kubernetes.io/projected/becfd5e1-5c42-4a2c-83ca-bd7f02855288-kube-api-access-gvn29\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.121441 4761 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.121456 4761 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.121468 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.121482 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.407029 4761 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" event={"ID":"becfd5e1-5c42-4a2c-83ca-bd7f02855288","Type":"ContainerDied","Data":"fb06cafc75ca98eee0a01f20cb296ae25aa8b32b6410c7c9c6af7c16cfc47495"} Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.407082 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb06cafc75ca98eee0a01f20cb296ae25aa8b32b6410c7c9c6af7c16cfc47495" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.407108 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.519773 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9"] Mar 07 08:33:55 crc kubenswrapper[4761]: E0307 08:33:55.520274 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerName="extract-content" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.520294 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerName="extract-content" Mar 07 08:33:55 crc kubenswrapper[4761]: E0307 08:33:55.520314 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="becfd5e1-5c42-4a2c-83ca-bd7f02855288" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.520322 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="becfd5e1-5c42-4a2c-83ca-bd7f02855288" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 07 08:33:55 crc kubenswrapper[4761]: E0307 08:33:55.520346 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerName="registry-server" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.520351 4761 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerName="registry-server" Mar 07 08:33:55 crc kubenswrapper[4761]: E0307 08:33:55.520361 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerName="extract-utilities" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.520367 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerName="extract-utilities" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.520574 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerName="registry-server" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.520594 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="becfd5e1-5c42-4a2c-83ca-bd7f02855288" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.521418 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.524831 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.524839 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.525080 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.525194 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.525233 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.525299 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.525415 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.532873 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9"] Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633207 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: 
I0307 08:33:55.633570 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633601 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbpb9\" (UniqueName: \"kubernetes.io/projected/46b536e5-c591-42d8-8903-51e4078bfa09-kube-api-access-gbpb9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633629 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633660 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633685 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633738 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633764 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633802 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633823 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/46b536e5-c591-42d8-8903-51e4078bfa09-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: 
\"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633843 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.735585 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.735644 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.735683 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbpb9\" (UniqueName: \"kubernetes.io/projected/46b536e5-c591-42d8-8903-51e4078bfa09-kube-api-access-gbpb9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.735738 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.736146 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.736466 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.736537 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.736584 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: 
\"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.736657 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.736692 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/46b536e5-c591-42d8-8903-51e4078bfa09-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.736742 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.737844 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/46b536e5-c591-42d8-8903-51e4078bfa09-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.739810 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.739831 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.740037 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.741617 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.741750 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.742081 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.743580 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.743741 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.744315 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.754010 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbpb9\" (UniqueName: 
\"kubernetes.io/projected/46b536e5-c591-42d8-8903-51e4078bfa09-kube-api-access-gbpb9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.849026 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:56 crc kubenswrapper[4761]: I0307 08:33:56.432208 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9"] Mar 07 08:33:56 crc kubenswrapper[4761]: W0307 08:33:56.432933 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46b536e5_c591_42d8_8903_51e4078bfa09.slice/crio-1be2be2d164977c810f096912728f47bddce688288b73b9874418d367f37dac6 WatchSource:0}: Error finding container 1be2be2d164977c810f096912728f47bddce688288b73b9874418d367f37dac6: Status 404 returned error can't find the container with id 1be2be2d164977c810f096912728f47bddce688288b73b9874418d367f37dac6 Mar 07 08:33:57 crc kubenswrapper[4761]: I0307 08:33:57.432652 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" event={"ID":"46b536e5-c591-42d8-8903-51e4078bfa09","Type":"ContainerStarted","Data":"a4a135c32523adb6b908df78ee9b2c4c7c2e3acf5ea86537762f94fe317ebaf5"} Mar 07 08:33:57 crc kubenswrapper[4761]: I0307 08:33:57.433058 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" event={"ID":"46b536e5-c591-42d8-8903-51e4078bfa09","Type":"ContainerStarted","Data":"1be2be2d164977c810f096912728f47bddce688288b73b9874418d367f37dac6"} Mar 07 08:33:57 crc kubenswrapper[4761]: I0307 08:33:57.468261 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" podStartSLOduration=1.902067036 podStartE2EDuration="2.468241892s" podCreationTimestamp="2026-03-07 08:33:55 +0000 UTC" firstStartedPulling="2026-03-07 08:33:56.434982606 +0000 UTC m=+2693.344149081" lastFinishedPulling="2026-03-07 08:33:57.001157452 +0000 UTC m=+2693.910323937" observedRunningTime="2026-03-07 08:33:57.460500539 +0000 UTC m=+2694.369667054" watchObservedRunningTime="2026-03-07 08:33:57.468241892 +0000 UTC m=+2694.377408357"
Mar 07 08:33:58 crc kubenswrapper[4761]: I0307 08:33:58.705187 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"
Mar 07 08:33:58 crc kubenswrapper[4761]: E0307 08:33:58.705668 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.133510 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547874-cftxn"]
Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.135173 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547874-cftxn"
Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.136896 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv"
Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.137201 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.138248 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.149531 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547874-cftxn"]
Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.282995 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbvwg\" (UniqueName: \"kubernetes.io/projected/bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8-kube-api-access-bbvwg\") pod \"auto-csr-approver-29547874-cftxn\" (UID: \"bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8\") " pod="openshift-infra/auto-csr-approver-29547874-cftxn"
Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.385878 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbvwg\" (UniqueName: \"kubernetes.io/projected/bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8-kube-api-access-bbvwg\") pod \"auto-csr-approver-29547874-cftxn\" (UID: \"bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8\") " pod="openshift-infra/auto-csr-approver-29547874-cftxn"
Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.408854 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbvwg\" (UniqueName: \"kubernetes.io/projected/bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8-kube-api-access-bbvwg\") pod \"auto-csr-approver-29547874-cftxn\" (UID: \"bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8\") " pod="openshift-infra/auto-csr-approver-29547874-cftxn"
Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.467544 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547874-cftxn"
Mar 07 08:34:00 crc kubenswrapper[4761]: W0307 08:34:00.973201 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb0b09e6_7c82_44b5_93e3_f1b14abd8fe8.slice/crio-7b1ee6af349804b9ed6514b5623e545ee1e6ca023d0d1fa5cf2ac93f17dddc0a WatchSource:0}: Error finding container 7b1ee6af349804b9ed6514b5623e545ee1e6ca023d0d1fa5cf2ac93f17dddc0a: Status 404 returned error can't find the container with id 7b1ee6af349804b9ed6514b5623e545ee1e6ca023d0d1fa5cf2ac93f17dddc0a
Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.977376 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547874-cftxn"]
Mar 07 08:34:01 crc kubenswrapper[4761]: I0307 08:34:01.500289 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547874-cftxn" event={"ID":"bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8","Type":"ContainerStarted","Data":"7b1ee6af349804b9ed6514b5623e545ee1e6ca023d0d1fa5cf2ac93f17dddc0a"}
Mar 07 08:34:02 crc kubenswrapper[4761]: I0307 08:34:02.511671 4761 generic.go:334] "Generic (PLEG): container finished" podID="bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8" containerID="8d0570c7d65ba7f66903e755abe533aa432b280534b4abb0c933ffd5817b4e9a" exitCode=0
Mar 07 08:34:02 crc kubenswrapper[4761]: I0307 08:34:02.511754 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547874-cftxn" event={"ID":"bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8","Type":"ContainerDied","Data":"8d0570c7d65ba7f66903e755abe533aa432b280534b4abb0c933ffd5817b4e9a"}
Mar 07 08:34:03 crc kubenswrapper[4761]: I0307 08:34:03.931425 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547874-cftxn"
Mar 07 08:34:04 crc kubenswrapper[4761]: I0307 08:34:04.094792 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbvwg\" (UniqueName: \"kubernetes.io/projected/bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8-kube-api-access-bbvwg\") pod \"bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8\" (UID: \"bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8\") "
Mar 07 08:34:04 crc kubenswrapper[4761]: I0307 08:34:04.102336 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8-kube-api-access-bbvwg" (OuterVolumeSpecName: "kube-api-access-bbvwg") pod "bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8" (UID: "bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8"). InnerVolumeSpecName "kube-api-access-bbvwg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:34:04 crc kubenswrapper[4761]: I0307 08:34:04.198559 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbvwg\" (UniqueName: \"kubernetes.io/projected/bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8-kube-api-access-bbvwg\") on node \"crc\" DevicePath \"\""
Mar 07 08:34:04 crc kubenswrapper[4761]: I0307 08:34:04.537847 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547874-cftxn" event={"ID":"bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8","Type":"ContainerDied","Data":"7b1ee6af349804b9ed6514b5623e545ee1e6ca023d0d1fa5cf2ac93f17dddc0a"}
Mar 07 08:34:04 crc kubenswrapper[4761]: I0307 08:34:04.537922 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b1ee6af349804b9ed6514b5623e545ee1e6ca023d0d1fa5cf2ac93f17dddc0a"
Mar 07 08:34:04 crc kubenswrapper[4761]: I0307 08:34:04.538001 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547874-cftxn"
Mar 07 08:34:05 crc kubenswrapper[4761]: I0307 08:34:05.047385 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547868-8xrsg"]
Mar 07 08:34:05 crc kubenswrapper[4761]: I0307 08:34:05.090637 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547868-8xrsg"]
Mar 07 08:34:05 crc kubenswrapper[4761]: I0307 08:34:05.719037 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24a1f3f8-f795-495f-bb5a-58c9511a97f2" path="/var/lib/kubelet/pods/24a1f3f8-f795-495f-bb5a-58c9511a97f2/volumes"
Mar 07 08:34:10 crc kubenswrapper[4761]: I0307 08:34:10.705452 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"
Mar 07 08:34:10 crc kubenswrapper[4761]: E0307 08:34:10.706249 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:34:19 crc kubenswrapper[4761]: I0307 08:34:19.637492 4761 scope.go:117] "RemoveContainer" containerID="90c37012179316620f55f1d98a9814b88c29420d9b4a62e6c8a02946fc534a5e"
Mar 07 08:34:25 crc kubenswrapper[4761]: I0307 08:34:25.706342 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"
Mar 07 08:34:25 crc kubenswrapper[4761]: E0307 08:34:25.707303 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:34:36 crc kubenswrapper[4761]: I0307 08:34:36.706213 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"
Mar 07 08:34:36 crc kubenswrapper[4761]: E0307 08:34:36.706885 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:34:49 crc kubenswrapper[4761]: I0307 08:34:49.706185 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"
Mar 07 08:34:49 crc kubenswrapper[4761]: E0307 08:34:49.707065 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:35:00 crc kubenswrapper[4761]: I0307 08:35:00.706676 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"
Mar 07 08:35:00 crc kubenswrapper[4761]: E0307 08:35:00.707649 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:35:13 crc kubenswrapper[4761]: I0307 08:35:13.719514 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"
Mar 07 08:35:13 crc kubenswrapper[4761]: E0307 08:35:13.720617 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:35:24 crc kubenswrapper[4761]: I0307 08:35:24.709022 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"
Mar 07 08:35:24 crc kubenswrapper[4761]: E0307 08:35:24.710375 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:35:37 crc kubenswrapper[4761]: I0307 08:35:37.706768 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"
Mar 07 08:35:37 crc kubenswrapper[4761]: E0307 08:35:37.707551 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:35:52 crc kubenswrapper[4761]: I0307 08:35:52.707151 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"
Mar 07 08:35:54 crc kubenswrapper[4761]: I0307 08:35:54.009626 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"e9ae313dee187491879d92687a5c7e694903f5090225f2eed07b87d6931c5e34"}
Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.164013 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547876-b8vfr"]
Mar 07 08:36:00 crc kubenswrapper[4761]: E0307 08:36:00.165324 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8" containerName="oc"
Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.165344 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8" containerName="oc"
Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.165662 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8" containerName="oc"
Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.166873 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547876-b8vfr"
Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.171422 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.171665 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.176097 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv"
Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.180083 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547876-b8vfr"]
Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.312528 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsfkf\" (UniqueName: \"kubernetes.io/projected/e75faa3e-3ab5-4269-b968-2d7cdf2d4e67-kube-api-access-zsfkf\") pod \"auto-csr-approver-29547876-b8vfr\" (UID: \"e75faa3e-3ab5-4269-b968-2d7cdf2d4e67\") " pod="openshift-infra/auto-csr-approver-29547876-b8vfr"
Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.414708 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsfkf\" (UniqueName: \"kubernetes.io/projected/e75faa3e-3ab5-4269-b968-2d7cdf2d4e67-kube-api-access-zsfkf\") pod \"auto-csr-approver-29547876-b8vfr\" (UID: \"e75faa3e-3ab5-4269-b968-2d7cdf2d4e67\") " pod="openshift-infra/auto-csr-approver-29547876-b8vfr"
Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.439315 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsfkf\" (UniqueName: \"kubernetes.io/projected/e75faa3e-3ab5-4269-b968-2d7cdf2d4e67-kube-api-access-zsfkf\") pod \"auto-csr-approver-29547876-b8vfr\" (UID: \"e75faa3e-3ab5-4269-b968-2d7cdf2d4e67\") " pod="openshift-infra/auto-csr-approver-29547876-b8vfr"
Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.529399 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547876-b8vfr"
Mar 07 08:36:01 crc kubenswrapper[4761]: I0307 08:36:01.041028 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547876-b8vfr"]
Mar 07 08:36:01 crc kubenswrapper[4761]: I0307 08:36:01.048928 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 07 08:36:01 crc kubenswrapper[4761]: I0307 08:36:01.111387 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547876-b8vfr" event={"ID":"e75faa3e-3ab5-4269-b968-2d7cdf2d4e67","Type":"ContainerStarted","Data":"f23319b4c4b2e59e79c0477b3c634b8eb94575f9096b8029e02441c9ce18a1b8"}
Mar 07 08:36:03 crc kubenswrapper[4761]: I0307 08:36:03.175242 4761 generic.go:334] "Generic (PLEG): container finished" podID="e75faa3e-3ab5-4269-b968-2d7cdf2d4e67" containerID="c479cd71fe870c2b2433ab8369219c685056c34b9bd6dacda1943e5d420d633a" exitCode=0
Mar 07 08:36:03 crc kubenswrapper[4761]: I0307 08:36:03.175680 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547876-b8vfr" event={"ID":"e75faa3e-3ab5-4269-b968-2d7cdf2d4e67","Type":"ContainerDied","Data":"c479cd71fe870c2b2433ab8369219c685056c34b9bd6dacda1943e5d420d633a"}
Mar 07 08:36:04 crc kubenswrapper[4761]: I0307 08:36:04.675587 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547876-b8vfr"
Mar 07 08:36:04 crc kubenswrapper[4761]: I0307 08:36:04.716196 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsfkf\" (UniqueName: \"kubernetes.io/projected/e75faa3e-3ab5-4269-b968-2d7cdf2d4e67-kube-api-access-zsfkf\") pod \"e75faa3e-3ab5-4269-b968-2d7cdf2d4e67\" (UID: \"e75faa3e-3ab5-4269-b968-2d7cdf2d4e67\") "
Mar 07 08:36:04 crc kubenswrapper[4761]: I0307 08:36:04.739003 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e75faa3e-3ab5-4269-b968-2d7cdf2d4e67-kube-api-access-zsfkf" (OuterVolumeSpecName: "kube-api-access-zsfkf") pod "e75faa3e-3ab5-4269-b968-2d7cdf2d4e67" (UID: "e75faa3e-3ab5-4269-b968-2d7cdf2d4e67"). InnerVolumeSpecName "kube-api-access-zsfkf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:36:04 crc kubenswrapper[4761]: I0307 08:36:04.819442 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsfkf\" (UniqueName: \"kubernetes.io/projected/e75faa3e-3ab5-4269-b968-2d7cdf2d4e67-kube-api-access-zsfkf\") on node \"crc\" DevicePath \"\""
Mar 07 08:36:05 crc kubenswrapper[4761]: I0307 08:36:05.204939 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547876-b8vfr" event={"ID":"e75faa3e-3ab5-4269-b968-2d7cdf2d4e67","Type":"ContainerDied","Data":"f23319b4c4b2e59e79c0477b3c634b8eb94575f9096b8029e02441c9ce18a1b8"}
Mar 07 08:36:05 crc kubenswrapper[4761]: I0307 08:36:05.205327 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f23319b4c4b2e59e79c0477b3c634b8eb94575f9096b8029e02441c9ce18a1b8"
Mar 07 08:36:05 crc kubenswrapper[4761]: I0307 08:36:05.205448 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547876-b8vfr"
Mar 07 08:36:05 crc kubenswrapper[4761]: I0307 08:36:05.759565 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547870-29rm2"]
Mar 07 08:36:05 crc kubenswrapper[4761]: I0307 08:36:05.770368 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547870-29rm2"]
Mar 07 08:36:07 crc kubenswrapper[4761]: I0307 08:36:07.723515 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d0714e5-c95e-4bca-8c34-abeff1b1fd92" path="/var/lib/kubelet/pods/5d0714e5-c95e-4bca-8c34-abeff1b1fd92/volumes"
Mar 07 08:36:19 crc kubenswrapper[4761]: I0307 08:36:19.755555 4761 scope.go:117] "RemoveContainer" containerID="2b6d3e93406ac228f6d5810b53d964adf521c73909ac73e439e3d83972514623"
Mar 07 08:36:22 crc kubenswrapper[4761]: I0307 08:36:22.437411 4761 generic.go:334] "Generic (PLEG): container finished" podID="46b536e5-c591-42d8-8903-51e4078bfa09" containerID="a4a135c32523adb6b908df78ee9b2c4c7c2e3acf5ea86537762f94fe317ebaf5" exitCode=0
Mar 07 08:36:22 crc kubenswrapper[4761]: I0307 08:36:22.437491 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" event={"ID":"46b536e5-c591-42d8-8903-51e4078bfa09","Type":"ContainerDied","Data":"a4a135c32523adb6b908df78ee9b2c4c7c2e3acf5ea86537762f94fe317ebaf5"}
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.025372 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.191386 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-ssh-key-openstack-edpm-ipam\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") "
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.191829 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-0\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") "
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.191923 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-inventory\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") "
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.191997 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbpb9\" (UniqueName: \"kubernetes.io/projected/46b536e5-c591-42d8-8903-51e4078bfa09-kube-api-access-gbpb9\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") "
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.192122 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-0\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") "
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.192174 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-1\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") "
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.192211 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/46b536e5-c591-42d8-8903-51e4078bfa09-nova-extra-config-0\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") "
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.192239 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-2\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") "
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.192323 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-combined-ca-bundle\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") "
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.192363 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-1\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") "
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.192454 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-3\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") "
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.201027 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46b536e5-c591-42d8-8903-51e4078bfa09-kube-api-access-gbpb9" (OuterVolumeSpecName: "kube-api-access-gbpb9") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "kube-api-access-gbpb9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.204046 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.231162 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.234038 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-inventory" (OuterVolumeSpecName: "inventory") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.234545 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.238945 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.239159 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46b536e5-c591-42d8-8903-51e4078bfa09-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.250499 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.250652 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.258103 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.272337 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295430 4761 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295465 4761 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295475 4761 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295485 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295493 4761 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295503 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-inventory\") on node \"crc\" DevicePath \"\""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295511 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbpb9\" (UniqueName: \"kubernetes.io/projected/46b536e5-c591-42d8-8903-51e4078bfa09-kube-api-access-gbpb9\") on node \"crc\" DevicePath \"\""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295521 4761 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295531 4761 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295540 4761 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/46b536e5-c591-42d8-8903-51e4078bfa09-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295548 4761 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\""
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.464835 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" event={"ID":"46b536e5-c591-42d8-8903-51e4078bfa09","Type":"ContainerDied","Data":"1be2be2d164977c810f096912728f47bddce688288b73b9874418d367f37dac6"}
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.464877 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1be2be2d164977c810f096912728f47bddce688288b73b9874418d367f37dac6"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.464896 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.589891 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp"]
Mar 07 08:36:24 crc kubenswrapper[4761]: E0307 08:36:24.590860 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b536e5-c591-42d8-8903-51e4078bfa09" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.590998 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b536e5-c591-42d8-8903-51e4078bfa09" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Mar 07 08:36:24 crc kubenswrapper[4761]: E0307 08:36:24.591214 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e75faa3e-3ab5-4269-b968-2d7cdf2d4e67" containerName="oc"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.591324 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e75faa3e-3ab5-4269-b968-2d7cdf2d4e67" containerName="oc"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.591698 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="46b536e5-c591-42d8-8903-51e4078bfa09" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.591844 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="e75faa3e-3ab5-4269-b968-2d7cdf2d4e67" containerName="oc"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.593178 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.597137 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.597272 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.597480 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.597624 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.597637 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.601340 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp"]
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.707213 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.707564 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.707958 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-845k8\" (UniqueName: \"kubernetes.io/projected/79854881-fc6e-4976-b6c3-ac4f5fa42340-kube-api-access-845k8\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.708048 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.708142 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp"
Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.709221 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") "
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.709652 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.812810 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.812908 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.813034 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.813090 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.813209 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.813404 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.813440 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-845k8\" (UniqueName: \"kubernetes.io/projected/79854881-fc6e-4976-b6c3-ac4f5fa42340-kube-api-access-845k8\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.817133 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-inventory\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.817401 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.817864 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.818054 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.819677 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 
07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.819968 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.840902 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-845k8\" (UniqueName: \"kubernetes.io/projected/79854881-fc6e-4976-b6c3-ac4f5fa42340-kube-api-access-845k8\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.915960 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:25 crc kubenswrapper[4761]: I0307 08:36:25.650178 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp"] Mar 07 08:36:26 crc kubenswrapper[4761]: I0307 08:36:26.487536 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" event={"ID":"79854881-fc6e-4976-b6c3-ac4f5fa42340","Type":"ContainerStarted","Data":"e2980767a48ca6d21f99f6f57ae2beffe3d21252c493528e764f4a96402e02d8"} Mar 07 08:36:26 crc kubenswrapper[4761]: I0307 08:36:26.488096 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" event={"ID":"79854881-fc6e-4976-b6c3-ac4f5fa42340","Type":"ContainerStarted","Data":"02a38687e93d46cc7f9a25a41b190443ea01f1df71cdb1980ccfca77caa03327"} Mar 07 08:36:26 crc kubenswrapper[4761]: I0307 08:36:26.542494 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" podStartSLOduration=2.14539501 podStartE2EDuration="2.542474711s" podCreationTimestamp="2026-03-07 08:36:24 +0000 UTC" firstStartedPulling="2026-03-07 08:36:25.653176581 +0000 UTC m=+2842.562343066" lastFinishedPulling="2026-03-07 08:36:26.050256252 +0000 UTC m=+2842.959422767" observedRunningTime="2026-03-07 08:36:26.503945098 +0000 UTC m=+2843.413111583" watchObservedRunningTime="2026-03-07 08:36:26.542474711 +0000 UTC m=+2843.451641186" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.080249 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6tgtx"] Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.083838 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.127575 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6tgtx"] Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.207595 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-catalog-content\") pod \"redhat-operators-6tgtx\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.207656 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kkb2\" (UniqueName: \"kubernetes.io/projected/0a385d41-4b89-4bbc-8062-13d2b3d045da-kube-api-access-4kkb2\") pod \"redhat-operators-6tgtx\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.207793 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-utilities\") pod \"redhat-operators-6tgtx\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.310896 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-catalog-content\") pod \"redhat-operators-6tgtx\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.311319 4761 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-4kkb2\" (UniqueName: \"kubernetes.io/projected/0a385d41-4b89-4bbc-8062-13d2b3d045da-kube-api-access-4kkb2\") pod \"redhat-operators-6tgtx\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.311575 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-catalog-content\") pod \"redhat-operators-6tgtx\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.311819 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-utilities\") pod \"redhat-operators-6tgtx\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.312188 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-utilities\") pod \"redhat-operators-6tgtx\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.337998 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kkb2\" (UniqueName: \"kubernetes.io/projected/0a385d41-4b89-4bbc-8062-13d2b3d045da-kube-api-access-4kkb2\") pod \"redhat-operators-6tgtx\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.458576 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.956657 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6tgtx"] Mar 07 08:37:00 crc kubenswrapper[4761]: I0307 08:37:00.934896 4761 generic.go:334] "Generic (PLEG): container finished" podID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerID="e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd" exitCode=0 Mar 07 08:37:00 crc kubenswrapper[4761]: I0307 08:37:00.935478 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tgtx" event={"ID":"0a385d41-4b89-4bbc-8062-13d2b3d045da","Type":"ContainerDied","Data":"e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd"} Mar 07 08:37:00 crc kubenswrapper[4761]: I0307 08:37:00.935507 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tgtx" event={"ID":"0a385d41-4b89-4bbc-8062-13d2b3d045da","Type":"ContainerStarted","Data":"6e8269c4ee270015bb9e967525be815f2de6c18eebad469db76101be5c6468a5"} Mar 07 08:37:01 crc kubenswrapper[4761]: I0307 08:37:01.954364 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tgtx" event={"ID":"0a385d41-4b89-4bbc-8062-13d2b3d045da","Type":"ContainerStarted","Data":"f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704"} Mar 07 08:37:08 crc kubenswrapper[4761]: I0307 08:37:08.048624 4761 generic.go:334] "Generic (PLEG): container finished" podID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerID="f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704" exitCode=0 Mar 07 08:37:08 crc kubenswrapper[4761]: I0307 08:37:08.048844 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tgtx" 
event={"ID":"0a385d41-4b89-4bbc-8062-13d2b3d045da","Type":"ContainerDied","Data":"f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704"} Mar 07 08:37:09 crc kubenswrapper[4761]: I0307 08:37:09.065824 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tgtx" event={"ID":"0a385d41-4b89-4bbc-8062-13d2b3d045da","Type":"ContainerStarted","Data":"deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1"} Mar 07 08:37:09 crc kubenswrapper[4761]: I0307 08:37:09.099693 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6tgtx" podStartSLOduration=2.485886484 podStartE2EDuration="10.099671981s" podCreationTimestamp="2026-03-07 08:36:59 +0000 UTC" firstStartedPulling="2026-03-07 08:37:00.938270781 +0000 UTC m=+2877.847437296" lastFinishedPulling="2026-03-07 08:37:08.552056278 +0000 UTC m=+2885.461222793" observedRunningTime="2026-03-07 08:37:09.08962428 +0000 UTC m=+2885.998790765" watchObservedRunningTime="2026-03-07 08:37:09.099671981 +0000 UTC m=+2886.008838466" Mar 07 08:37:09 crc kubenswrapper[4761]: I0307 08:37:09.459873 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:37:09 crc kubenswrapper[4761]: I0307 08:37:09.460223 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:37:10 crc kubenswrapper[4761]: I0307 08:37:10.556822 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6tgtx" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerName="registry-server" probeResult="failure" output=< Mar 07 08:37:10 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:37:10 crc kubenswrapper[4761]: > Mar 07 08:37:19 crc kubenswrapper[4761]: I0307 08:37:19.561422 4761 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:37:19 crc kubenswrapper[4761]: I0307 08:37:19.656411 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:37:19 crc kubenswrapper[4761]: I0307 08:37:19.825908 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6tgtx"] Mar 07 08:37:21 crc kubenswrapper[4761]: I0307 08:37:21.208815 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6tgtx" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerName="registry-server" containerID="cri-o://deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1" gracePeriod=2 Mar 07 08:37:21 crc kubenswrapper[4761]: I0307 08:37:21.818357 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:37:21 crc kubenswrapper[4761]: I0307 08:37:21.983666 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-utilities\") pod \"0a385d41-4b89-4bbc-8062-13d2b3d045da\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " Mar 07 08:37:21 crc kubenswrapper[4761]: I0307 08:37:21.984019 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kkb2\" (UniqueName: \"kubernetes.io/projected/0a385d41-4b89-4bbc-8062-13d2b3d045da-kube-api-access-4kkb2\") pod \"0a385d41-4b89-4bbc-8062-13d2b3d045da\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " Mar 07 08:37:21 crc kubenswrapper[4761]: I0307 08:37:21.984124 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-catalog-content\") pod 
\"0a385d41-4b89-4bbc-8062-13d2b3d045da\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " Mar 07 08:37:21 crc kubenswrapper[4761]: I0307 08:37:21.984996 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-utilities" (OuterVolumeSpecName: "utilities") pod "0a385d41-4b89-4bbc-8062-13d2b3d045da" (UID: "0a385d41-4b89-4bbc-8062-13d2b3d045da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:37:21 crc kubenswrapper[4761]: I0307 08:37:21.993319 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a385d41-4b89-4bbc-8062-13d2b3d045da-kube-api-access-4kkb2" (OuterVolumeSpecName: "kube-api-access-4kkb2") pod "0a385d41-4b89-4bbc-8062-13d2b3d045da" (UID: "0a385d41-4b89-4bbc-8062-13d2b3d045da"). InnerVolumeSpecName "kube-api-access-4kkb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.097790 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kkb2\" (UniqueName: \"kubernetes.io/projected/0a385d41-4b89-4bbc-8062-13d2b3d045da-kube-api-access-4kkb2\") on node \"crc\" DevicePath \"\"" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.098095 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.143476 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a385d41-4b89-4bbc-8062-13d2b3d045da" (UID: "0a385d41-4b89-4bbc-8062-13d2b3d045da"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.201249 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.224147 4761 generic.go:334] "Generic (PLEG): container finished" podID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerID="deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1" exitCode=0 Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.224196 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tgtx" event={"ID":"0a385d41-4b89-4bbc-8062-13d2b3d045da","Type":"ContainerDied","Data":"deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1"} Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.224226 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.224255 4761 scope.go:117] "RemoveContainer" containerID="deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.224236 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tgtx" event={"ID":"0a385d41-4b89-4bbc-8062-13d2b3d045da","Type":"ContainerDied","Data":"6e8269c4ee270015bb9e967525be815f2de6c18eebad469db76101be5c6468a5"} Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.259394 4761 scope.go:117] "RemoveContainer" containerID="f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.272100 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6tgtx"] Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.281786 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6tgtx"] Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.302445 4761 scope.go:117] "RemoveContainer" containerID="e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.359140 4761 scope.go:117] "RemoveContainer" containerID="deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1" Mar 07 08:37:22 crc kubenswrapper[4761]: E0307 08:37:22.359603 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1\": container with ID starting with deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1 not found: ID does not exist" containerID="deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.359661 4761 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1"} err="failed to get container status \"deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1\": rpc error: code = NotFound desc = could not find container \"deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1\": container with ID starting with deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1 not found: ID does not exist" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.359687 4761 scope.go:117] "RemoveContainer" containerID="f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704" Mar 07 08:37:22 crc kubenswrapper[4761]: E0307 08:37:22.360009 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704\": container with ID starting with f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704 not found: ID does not exist" containerID="f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.360029 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704"} err="failed to get container status \"f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704\": rpc error: code = NotFound desc = could not find container \"f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704\": container with ID starting with f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704 not found: ID does not exist" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.360062 4761 scope.go:117] "RemoveContainer" containerID="e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd" Mar 07 08:37:22 crc kubenswrapper[4761]: E0307 
08:37:22.360854 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd\": container with ID starting with e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd not found: ID does not exist" containerID="e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.360893 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd"} err="failed to get container status \"e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd\": rpc error: code = NotFound desc = could not find container \"e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd\": container with ID starting with e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd not found: ID does not exist" Mar 07 08:37:23 crc kubenswrapper[4761]: I0307 08:37:23.720443 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" path="/var/lib/kubelet/pods/0a385d41-4b89-4bbc-8062-13d2b3d045da/volumes" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.168853 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547878-rqgwh"] Mar 07 08:38:00 crc kubenswrapper[4761]: E0307 08:38:00.169911 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerName="extract-content" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.169927 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerName="extract-content" Mar 07 08:38:00 crc kubenswrapper[4761]: E0307 08:38:00.169982 4761 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerName="registry-server" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.169990 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerName="registry-server" Mar 07 08:38:00 crc kubenswrapper[4761]: E0307 08:38:00.170010 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerName="extract-utilities" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.170020 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerName="extract-utilities" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.170306 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerName="registry-server" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.171289 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547878-rqgwh" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.175142 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.175387 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.175536 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.190618 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547878-rqgwh"] Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.304665 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnx6m\" (UniqueName: 
\"kubernetes.io/projected/c04530ab-a8e3-4851-a74b-f33ead6584f2-kube-api-access-wnx6m\") pod \"auto-csr-approver-29547878-rqgwh\" (UID: \"c04530ab-a8e3-4851-a74b-f33ead6584f2\") " pod="openshift-infra/auto-csr-approver-29547878-rqgwh" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.408062 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnx6m\" (UniqueName: \"kubernetes.io/projected/c04530ab-a8e3-4851-a74b-f33ead6584f2-kube-api-access-wnx6m\") pod \"auto-csr-approver-29547878-rqgwh\" (UID: \"c04530ab-a8e3-4851-a74b-f33ead6584f2\") " pod="openshift-infra/auto-csr-approver-29547878-rqgwh" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.433812 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnx6m\" (UniqueName: \"kubernetes.io/projected/c04530ab-a8e3-4851-a74b-f33ead6584f2-kube-api-access-wnx6m\") pod \"auto-csr-approver-29547878-rqgwh\" (UID: \"c04530ab-a8e3-4851-a74b-f33ead6584f2\") " pod="openshift-infra/auto-csr-approver-29547878-rqgwh" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.491579 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547878-rqgwh" Mar 07 08:38:01 crc kubenswrapper[4761]: I0307 08:38:01.007971 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547878-rqgwh"] Mar 07 08:38:01 crc kubenswrapper[4761]: W0307 08:38:01.014348 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc04530ab_a8e3_4851_a74b_f33ead6584f2.slice/crio-c001891557017afa39f4281173f03948b47ca8839cbf2d402520d9077b58ff23 WatchSource:0}: Error finding container c001891557017afa39f4281173f03948b47ca8839cbf2d402520d9077b58ff23: Status 404 returned error can't find the container with id c001891557017afa39f4281173f03948b47ca8839cbf2d402520d9077b58ff23 Mar 07 08:38:01 crc kubenswrapper[4761]: I0307 08:38:01.767680 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547878-rqgwh" event={"ID":"c04530ab-a8e3-4851-a74b-f33ead6584f2","Type":"ContainerStarted","Data":"c001891557017afa39f4281173f03948b47ca8839cbf2d402520d9077b58ff23"} Mar 07 08:38:02 crc kubenswrapper[4761]: I0307 08:38:02.784678 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547878-rqgwh" event={"ID":"c04530ab-a8e3-4851-a74b-f33ead6584f2","Type":"ContainerStarted","Data":"39df8a4331f8fb8c36a05aebc4c360e5a67d3b1a2496b4852eed73be235926a7"} Mar 07 08:38:02 crc kubenswrapper[4761]: I0307 08:38:02.807429 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547878-rqgwh" podStartSLOduration=1.9908660569999999 podStartE2EDuration="2.807414479s" podCreationTimestamp="2026-03-07 08:38:00 +0000 UTC" firstStartedPulling="2026-03-07 08:38:01.018795608 +0000 UTC m=+2937.927962083" lastFinishedPulling="2026-03-07 08:38:01.83534401 +0000 UTC m=+2938.744510505" observedRunningTime="2026-03-07 08:38:02.803003768 +0000 UTC 
m=+2939.712170243" watchObservedRunningTime="2026-03-07 08:38:02.807414479 +0000 UTC m=+2939.716580954" Mar 07 08:38:03 crc kubenswrapper[4761]: I0307 08:38:03.804860 4761 generic.go:334] "Generic (PLEG): container finished" podID="c04530ab-a8e3-4851-a74b-f33ead6584f2" containerID="39df8a4331f8fb8c36a05aebc4c360e5a67d3b1a2496b4852eed73be235926a7" exitCode=0 Mar 07 08:38:03 crc kubenswrapper[4761]: I0307 08:38:03.805215 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547878-rqgwh" event={"ID":"c04530ab-a8e3-4851-a74b-f33ead6584f2","Type":"ContainerDied","Data":"39df8a4331f8fb8c36a05aebc4c360e5a67d3b1a2496b4852eed73be235926a7"} Mar 07 08:38:05 crc kubenswrapper[4761]: I0307 08:38:05.274789 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547878-rqgwh" Mar 07 08:38:05 crc kubenswrapper[4761]: I0307 08:38:05.441860 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnx6m\" (UniqueName: \"kubernetes.io/projected/c04530ab-a8e3-4851-a74b-f33ead6584f2-kube-api-access-wnx6m\") pod \"c04530ab-a8e3-4851-a74b-f33ead6584f2\" (UID: \"c04530ab-a8e3-4851-a74b-f33ead6584f2\") " Mar 07 08:38:05 crc kubenswrapper[4761]: I0307 08:38:05.450855 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04530ab-a8e3-4851-a74b-f33ead6584f2-kube-api-access-wnx6m" (OuterVolumeSpecName: "kube-api-access-wnx6m") pod "c04530ab-a8e3-4851-a74b-f33ead6584f2" (UID: "c04530ab-a8e3-4851-a74b-f33ead6584f2"). InnerVolumeSpecName "kube-api-access-wnx6m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:38:05 crc kubenswrapper[4761]: I0307 08:38:05.544562 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnx6m\" (UniqueName: \"kubernetes.io/projected/c04530ab-a8e3-4851-a74b-f33ead6584f2-kube-api-access-wnx6m\") on node \"crc\" DevicePath \"\"" Mar 07 08:38:05 crc kubenswrapper[4761]: I0307 08:38:05.837239 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547878-rqgwh" event={"ID":"c04530ab-a8e3-4851-a74b-f33ead6584f2","Type":"ContainerDied","Data":"c001891557017afa39f4281173f03948b47ca8839cbf2d402520d9077b58ff23"} Mar 07 08:38:05 crc kubenswrapper[4761]: I0307 08:38:05.837296 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c001891557017afa39f4281173f03948b47ca8839cbf2d402520d9077b58ff23" Mar 07 08:38:05 crc kubenswrapper[4761]: I0307 08:38:05.837376 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547878-rqgwh" Mar 07 08:38:05 crc kubenswrapper[4761]: I0307 08:38:05.897111 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547872-ztj7l"] Mar 07 08:38:05 crc kubenswrapper[4761]: I0307 08:38:05.910407 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547872-ztj7l"] Mar 07 08:38:07 crc kubenswrapper[4761]: I0307 08:38:07.722023 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4db60eb-e0dd-4faf-88bb-485798fe0bcf" path="/var/lib/kubelet/pods/d4db60eb-e0dd-4faf-88bb-485798fe0bcf/volumes" Mar 07 08:38:13 crc kubenswrapper[4761]: I0307 08:38:13.768775 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 07 08:38:13 crc kubenswrapper[4761]: I0307 08:38:13.769520 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:38:19 crc kubenswrapper[4761]: I0307 08:38:19.879680 4761 scope.go:117] "RemoveContainer" containerID="0b113e193a3a066fddc11a489472112bd26a3752e7d8b8891536b813be907092" Mar 07 08:38:43 crc kubenswrapper[4761]: I0307 08:38:43.768235 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:38:43 crc kubenswrapper[4761]: I0307 08:38:43.771335 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:38:53 crc kubenswrapper[4761]: I0307 08:38:53.481519 4761 generic.go:334] "Generic (PLEG): container finished" podID="79854881-fc6e-4976-b6c3-ac4f5fa42340" containerID="e2980767a48ca6d21f99f6f57ae2beffe3d21252c493528e764f4a96402e02d8" exitCode=0 Mar 07 08:38:53 crc kubenswrapper[4761]: I0307 08:38:53.481610 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" event={"ID":"79854881-fc6e-4976-b6c3-ac4f5fa42340","Type":"ContainerDied","Data":"e2980767a48ca6d21f99f6f57ae2beffe3d21252c493528e764f4a96402e02d8"} Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 
08:38:55.125310 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.240185 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-inventory\") pod \"79854881-fc6e-4976-b6c3-ac4f5fa42340\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.240499 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-1\") pod \"79854881-fc6e-4976-b6c3-ac4f5fa42340\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.240575 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ssh-key-openstack-edpm-ipam\") pod \"79854881-fc6e-4976-b6c3-ac4f5fa42340\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.240636 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-845k8\" (UniqueName: \"kubernetes.io/projected/79854881-fc6e-4976-b6c3-ac4f5fa42340-kube-api-access-845k8\") pod \"79854881-fc6e-4976-b6c3-ac4f5fa42340\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.240680 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-telemetry-combined-ca-bundle\") pod \"79854881-fc6e-4976-b6c3-ac4f5fa42340\" (UID: 
\"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.240791 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-0\") pod \"79854881-fc6e-4976-b6c3-ac4f5fa42340\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.240978 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-2\") pod \"79854881-fc6e-4976-b6c3-ac4f5fa42340\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.247255 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "79854881-fc6e-4976-b6c3-ac4f5fa42340" (UID: "79854881-fc6e-4976-b6c3-ac4f5fa42340"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.247300 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79854881-fc6e-4976-b6c3-ac4f5fa42340-kube-api-access-845k8" (OuterVolumeSpecName: "kube-api-access-845k8") pod "79854881-fc6e-4976-b6c3-ac4f5fa42340" (UID: "79854881-fc6e-4976-b6c3-ac4f5fa42340"). InnerVolumeSpecName "kube-api-access-845k8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.274760 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "79854881-fc6e-4976-b6c3-ac4f5fa42340" (UID: "79854881-fc6e-4976-b6c3-ac4f5fa42340"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.276686 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "79854881-fc6e-4976-b6c3-ac4f5fa42340" (UID: "79854881-fc6e-4976-b6c3-ac4f5fa42340"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.282866 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "79854881-fc6e-4976-b6c3-ac4f5fa42340" (UID: "79854881-fc6e-4976-b6c3-ac4f5fa42340"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.288581 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "79854881-fc6e-4976-b6c3-ac4f5fa42340" (UID: "79854881-fc6e-4976-b6c3-ac4f5fa42340"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.298985 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-inventory" (OuterVolumeSpecName: "inventory") pod "79854881-fc6e-4976-b6c3-ac4f5fa42340" (UID: "79854881-fc6e-4976-b6c3-ac4f5fa42340"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.346521 4761 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.346596 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.346625 4761 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.346653 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.346682 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-845k8\" (UniqueName: \"kubernetes.io/projected/79854881-fc6e-4976-b6c3-ac4f5fa42340-kube-api-access-845k8\") on node \"crc\" DevicePath \"\"" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.346708 4761 reconciler_common.go:293] 
"Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.346762 4761 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.513180 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" event={"ID":"79854881-fc6e-4976-b6c3-ac4f5fa42340","Type":"ContainerDied","Data":"02a38687e93d46cc7f9a25a41b190443ea01f1df71cdb1980ccfca77caa03327"} Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.513227 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02a38687e93d46cc7f9a25a41b190443ea01f1df71cdb1980ccfca77caa03327" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.513330 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.637026 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb"] Mar 07 08:38:55 crc kubenswrapper[4761]: E0307 08:38:55.637646 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04530ab-a8e3-4851-a74b-f33ead6584f2" containerName="oc" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.637661 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04530ab-a8e3-4851-a74b-f33ead6584f2" containerName="oc" Mar 07 08:38:55 crc kubenswrapper[4761]: E0307 08:38:55.637684 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79854881-fc6e-4976-b6c3-ac4f5fa42340" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.637702 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="79854881-fc6e-4976-b6c3-ac4f5fa42340" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.638054 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="79854881-fc6e-4976-b6c3-ac4f5fa42340" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.638077 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04530ab-a8e3-4851-a74b-f33ead6584f2" containerName="oc" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.639123 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.643164 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.643452 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.643684 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.643874 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.644020 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.653032 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb"] Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.757257 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.757947 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.759187 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.759361 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.759531 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4wzb\" (UniqueName: \"kubernetes.io/projected/729bd1e7-c268-4327-b30b-3f946a06775e-kube-api-access-c4wzb\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.759708 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.759948 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.861884 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.861954 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.862035 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4wzb\" (UniqueName: 
\"kubernetes.io/projected/729bd1e7-c268-4327-b30b-3f946a06775e-kube-api-access-c4wzb\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.862112 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.862211 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.862320 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.862556 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.867352 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.867552 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.868244 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.870409 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-2\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.870572 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.876810 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.890275 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4wzb\" (UniqueName: \"kubernetes.io/projected/729bd1e7-c268-4327-b30b-3f946a06775e-kube-api-access-c4wzb\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.995827 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:56 crc kubenswrapper[4761]: W0307 08:38:56.595104 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod729bd1e7_c268_4327_b30b_3f946a06775e.slice/crio-f3db0c2d7381ca7bef6ce2c29cc7a1a53d8834f6c13c7e8c2b814515b23f0b24 WatchSource:0}: Error finding container f3db0c2d7381ca7bef6ce2c29cc7a1a53d8834f6c13c7e8c2b814515b23f0b24: Status 404 returned error can't find the container with id f3db0c2d7381ca7bef6ce2c29cc7a1a53d8834f6c13c7e8c2b814515b23f0b24 Mar 07 08:38:56 crc kubenswrapper[4761]: I0307 08:38:56.600369 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb"] Mar 07 08:38:57 crc kubenswrapper[4761]: I0307 08:38:57.536686 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" event={"ID":"729bd1e7-c268-4327-b30b-3f946a06775e","Type":"ContainerStarted","Data":"6e6d185dfcb89f3e0e91fad74016b5cbbd79caa5e7b96b16f937715fdc7f5853"} Mar 07 08:38:57 crc kubenswrapper[4761]: I0307 08:38:57.537344 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" event={"ID":"729bd1e7-c268-4327-b30b-3f946a06775e","Type":"ContainerStarted","Data":"f3db0c2d7381ca7bef6ce2c29cc7a1a53d8834f6c13c7e8c2b814515b23f0b24"} Mar 07 08:38:57 crc kubenswrapper[4761]: I0307 08:38:57.561334 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" podStartSLOduration=2.0541560739999998 podStartE2EDuration="2.561304346s" podCreationTimestamp="2026-03-07 08:38:55 +0000 UTC" firstStartedPulling="2026-03-07 08:38:56.5996757 +0000 UTC m=+2993.508842175" lastFinishedPulling="2026-03-07 
08:38:57.106823962 +0000 UTC m=+2994.015990447" observedRunningTime="2026-03-07 08:38:57.559998534 +0000 UTC m=+2994.469165009" watchObservedRunningTime="2026-03-07 08:38:57.561304346 +0000 UTC m=+2994.470470821" Mar 07 08:39:13 crc kubenswrapper[4761]: I0307 08:39:13.768279 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:39:13 crc kubenswrapper[4761]: I0307 08:39:13.769142 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:39:13 crc kubenswrapper[4761]: I0307 08:39:13.769207 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:39:13 crc kubenswrapper[4761]: I0307 08:39:13.770637 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e9ae313dee187491879d92687a5c7e694903f5090225f2eed07b87d6931c5e34"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:39:13 crc kubenswrapper[4761]: I0307 08:39:13.770762 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://e9ae313dee187491879d92687a5c7e694903f5090225f2eed07b87d6931c5e34" gracePeriod=600 Mar 
07 08:39:14 crc kubenswrapper[4761]: I0307 08:39:14.782345 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="e9ae313dee187491879d92687a5c7e694903f5090225f2eed07b87d6931c5e34" exitCode=0 Mar 07 08:39:14 crc kubenswrapper[4761]: I0307 08:39:14.782701 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"e9ae313dee187491879d92687a5c7e694903f5090225f2eed07b87d6931c5e34"} Mar 07 08:39:14 crc kubenswrapper[4761]: I0307 08:39:14.782768 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"} Mar 07 08:39:14 crc kubenswrapper[4761]: I0307 08:39:14.782797 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.174864 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547880-4p5x8"] Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.179133 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547880-4p5x8" Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.216953 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.216972 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.220338 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.220815 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547880-4p5x8"] Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.345328 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hkd7\" (UniqueName: \"kubernetes.io/projected/26fb18a8-5400-4b0e-9f6f-47ad9c34e855-kube-api-access-9hkd7\") pod \"auto-csr-approver-29547880-4p5x8\" (UID: \"26fb18a8-5400-4b0e-9f6f-47ad9c34e855\") " pod="openshift-infra/auto-csr-approver-29547880-4p5x8" Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.446955 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hkd7\" (UniqueName: \"kubernetes.io/projected/26fb18a8-5400-4b0e-9f6f-47ad9c34e855-kube-api-access-9hkd7\") pod \"auto-csr-approver-29547880-4p5x8\" (UID: \"26fb18a8-5400-4b0e-9f6f-47ad9c34e855\") " pod="openshift-infra/auto-csr-approver-29547880-4p5x8" Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.481213 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hkd7\" (UniqueName: \"kubernetes.io/projected/26fb18a8-5400-4b0e-9f6f-47ad9c34e855-kube-api-access-9hkd7\") pod \"auto-csr-approver-29547880-4p5x8\" (UID: \"26fb18a8-5400-4b0e-9f6f-47ad9c34e855\") " 
pod="openshift-infra/auto-csr-approver-29547880-4p5x8" Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.557692 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547880-4p5x8" Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.871707 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547880-4p5x8"] Mar 07 08:40:00 crc kubenswrapper[4761]: W0307 08:40:00.889404 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26fb18a8_5400_4b0e_9f6f_47ad9c34e855.slice/crio-a4936a3da9fb90859752e5b83fa55b3854c97ffb691af283fac444d7a98f9e6a WatchSource:0}: Error finding container a4936a3da9fb90859752e5b83fa55b3854c97ffb691af283fac444d7a98f9e6a: Status 404 returned error can't find the container with id a4936a3da9fb90859752e5b83fa55b3854c97ffb691af283fac444d7a98f9e6a Mar 07 08:40:01 crc kubenswrapper[4761]: I0307 08:40:01.432648 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547880-4p5x8" event={"ID":"26fb18a8-5400-4b0e-9f6f-47ad9c34e855","Type":"ContainerStarted","Data":"a4936a3da9fb90859752e5b83fa55b3854c97ffb691af283fac444d7a98f9e6a"} Mar 07 08:40:02 crc kubenswrapper[4761]: I0307 08:40:02.445799 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547880-4p5x8" event={"ID":"26fb18a8-5400-4b0e-9f6f-47ad9c34e855","Type":"ContainerStarted","Data":"5645833573c137c62acdca0e5dbcbaf1825a9c618414f8121090966cb4f346a1"} Mar 07 08:40:02 crc kubenswrapper[4761]: I0307 08:40:02.481377 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547880-4p5x8" podStartSLOduration=1.447865937 podStartE2EDuration="2.481358365s" podCreationTimestamp="2026-03-07 08:40:00 +0000 UTC" firstStartedPulling="2026-03-07 08:40:00.898013929 +0000 UTC 
m=+3057.807180414" lastFinishedPulling="2026-03-07 08:40:01.931506367 +0000 UTC m=+3058.840672842" observedRunningTime="2026-03-07 08:40:02.463006976 +0000 UTC m=+3059.372173451" watchObservedRunningTime="2026-03-07 08:40:02.481358365 +0000 UTC m=+3059.390524840" Mar 07 08:40:03 crc kubenswrapper[4761]: I0307 08:40:03.483981 4761 generic.go:334] "Generic (PLEG): container finished" podID="26fb18a8-5400-4b0e-9f6f-47ad9c34e855" containerID="5645833573c137c62acdca0e5dbcbaf1825a9c618414f8121090966cb4f346a1" exitCode=0 Mar 07 08:40:03 crc kubenswrapper[4761]: I0307 08:40:03.485065 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547880-4p5x8" event={"ID":"26fb18a8-5400-4b0e-9f6f-47ad9c34e855","Type":"ContainerDied","Data":"5645833573c137c62acdca0e5dbcbaf1825a9c618414f8121090966cb4f346a1"} Mar 07 08:40:04 crc kubenswrapper[4761]: I0307 08:40:04.947015 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547880-4p5x8" Mar 07 08:40:05 crc kubenswrapper[4761]: I0307 08:40:05.105896 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hkd7\" (UniqueName: \"kubernetes.io/projected/26fb18a8-5400-4b0e-9f6f-47ad9c34e855-kube-api-access-9hkd7\") pod \"26fb18a8-5400-4b0e-9f6f-47ad9c34e855\" (UID: \"26fb18a8-5400-4b0e-9f6f-47ad9c34e855\") " Mar 07 08:40:05 crc kubenswrapper[4761]: I0307 08:40:05.112934 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26fb18a8-5400-4b0e-9f6f-47ad9c34e855-kube-api-access-9hkd7" (OuterVolumeSpecName: "kube-api-access-9hkd7") pod "26fb18a8-5400-4b0e-9f6f-47ad9c34e855" (UID: "26fb18a8-5400-4b0e-9f6f-47ad9c34e855"). InnerVolumeSpecName "kube-api-access-9hkd7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:40:05 crc kubenswrapper[4761]: I0307 08:40:05.209325 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hkd7\" (UniqueName: \"kubernetes.io/projected/26fb18a8-5400-4b0e-9f6f-47ad9c34e855-kube-api-access-9hkd7\") on node \"crc\" DevicePath \"\"" Mar 07 08:40:05 crc kubenswrapper[4761]: I0307 08:40:05.519011 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547880-4p5x8" event={"ID":"26fb18a8-5400-4b0e-9f6f-47ad9c34e855","Type":"ContainerDied","Data":"a4936a3da9fb90859752e5b83fa55b3854c97ffb691af283fac444d7a98f9e6a"} Mar 07 08:40:05 crc kubenswrapper[4761]: I0307 08:40:05.519060 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4936a3da9fb90859752e5b83fa55b3854c97ffb691af283fac444d7a98f9e6a" Mar 07 08:40:05 crc kubenswrapper[4761]: I0307 08:40:05.519182 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547880-4p5x8" Mar 07 08:40:05 crc kubenswrapper[4761]: I0307 08:40:05.562179 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547874-cftxn"] Mar 07 08:40:05 crc kubenswrapper[4761]: I0307 08:40:05.573505 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547874-cftxn"] Mar 07 08:40:05 crc kubenswrapper[4761]: I0307 08:40:05.724545 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8" path="/var/lib/kubelet/pods/bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8/volumes" Mar 07 08:40:20 crc kubenswrapper[4761]: I0307 08:40:20.034395 4761 scope.go:117] "RemoveContainer" containerID="5a7911494899f07a9ddb0a3eef2aeefa947512eb70ccc7d054981034b0920baa" Mar 07 08:40:20 crc kubenswrapper[4761]: I0307 08:40:20.081054 4761 scope.go:117] "RemoveContainer" 
containerID="77327279867cd2a1dd1c36663833c82ed5be9cbc83ccf92450d515c6d0bcfa61" Mar 07 08:40:20 crc kubenswrapper[4761]: I0307 08:40:20.112146 4761 scope.go:117] "RemoveContainer" containerID="8d0570c7d65ba7f66903e755abe533aa432b280534b4abb0c933ffd5817b4e9a" Mar 07 08:40:20 crc kubenswrapper[4761]: I0307 08:40:20.234657 4761 scope.go:117] "RemoveContainer" containerID="3bad4741f304f9654073a150f7d9a20a29668f1c7d82ad1cf3b0369848a1f027" Mar 07 08:40:59 crc kubenswrapper[4761]: I0307 08:40:59.151447 4761 generic.go:334] "Generic (PLEG): container finished" podID="729bd1e7-c268-4327-b30b-3f946a06775e" containerID="6e6d185dfcb89f3e0e91fad74016b5cbbd79caa5e7b96b16f937715fdc7f5853" exitCode=0 Mar 07 08:40:59 crc kubenswrapper[4761]: I0307 08:40:59.151708 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" event={"ID":"729bd1e7-c268-4327-b30b-3f946a06775e","Type":"ContainerDied","Data":"6e6d185dfcb89f3e0e91fad74016b5cbbd79caa5e7b96b16f937715fdc7f5853"} Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.684256 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.757793 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ssh-key-openstack-edpm-ipam\") pod \"729bd1e7-c268-4327-b30b-3f946a06775e\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.758218 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-2\") pod \"729bd1e7-c268-4327-b30b-3f946a06775e\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.758279 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4wzb\" (UniqueName: \"kubernetes.io/projected/729bd1e7-c268-4327-b30b-3f946a06775e-kube-api-access-c4wzb\") pod \"729bd1e7-c268-4327-b30b-3f946a06775e\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.758323 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-0\") pod \"729bd1e7-c268-4327-b30b-3f946a06775e\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.758407 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-telemetry-power-monitoring-combined-ca-bundle\") pod \"729bd1e7-c268-4327-b30b-3f946a06775e\" (UID: 
\"729bd1e7-c268-4327-b30b-3f946a06775e\") " Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.758478 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-inventory\") pod \"729bd1e7-c268-4327-b30b-3f946a06775e\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.758555 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-1\") pod \"729bd1e7-c268-4327-b30b-3f946a06775e\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.765905 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729bd1e7-c268-4327-b30b-3f946a06775e-kube-api-access-c4wzb" (OuterVolumeSpecName: "kube-api-access-c4wzb") pod "729bd1e7-c268-4327-b30b-3f946a06775e" (UID: "729bd1e7-c268-4327-b30b-3f946a06775e"). InnerVolumeSpecName "kube-api-access-c4wzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.766542 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "729bd1e7-c268-4327-b30b-3f946a06775e" (UID: "729bd1e7-c268-4327-b30b-3f946a06775e"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.792417 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "729bd1e7-c268-4327-b30b-3f946a06775e" (UID: "729bd1e7-c268-4327-b30b-3f946a06775e"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.800836 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "729bd1e7-c268-4327-b30b-3f946a06775e" (UID: "729bd1e7-c268-4327-b30b-3f946a06775e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.809100 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-inventory" (OuterVolumeSpecName: "inventory") pod "729bd1e7-c268-4327-b30b-3f946a06775e" (UID: "729bd1e7-c268-4327-b30b-3f946a06775e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.816266 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "729bd1e7-c268-4327-b30b-3f946a06775e" (UID: "729bd1e7-c268-4327-b30b-3f946a06775e"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.818883 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "729bd1e7-c268-4327-b30b-3f946a06775e" (UID: "729bd1e7-c268-4327-b30b-3f946a06775e"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.862313 4761 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.862360 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.862373 4761 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.862383 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.862392 4761 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" 
Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.862401 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4wzb\" (UniqueName: \"kubernetes.io/projected/729bd1e7-c268-4327-b30b-3f946a06775e-kube-api-access-c4wzb\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.862410 4761 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.175556 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" event={"ID":"729bd1e7-c268-4327-b30b-3f946a06775e","Type":"ContainerDied","Data":"f3db0c2d7381ca7bef6ce2c29cc7a1a53d8834f6c13c7e8c2b814515b23f0b24"} Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.175598 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3db0c2d7381ca7bef6ce2c29cc7a1a53d8834f6c13c7e8c2b814515b23f0b24" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.175634 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.378315 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt"] Mar 07 08:41:01 crc kubenswrapper[4761]: E0307 08:41:01.379151 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26fb18a8-5400-4b0e-9f6f-47ad9c34e855" containerName="oc" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.379220 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="26fb18a8-5400-4b0e-9f6f-47ad9c34e855" containerName="oc" Mar 07 08:41:01 crc kubenswrapper[4761]: E0307 08:41:01.379242 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729bd1e7-c268-4327-b30b-3f946a06775e" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.379258 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="729bd1e7-c268-4327-b30b-3f946a06775e" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.379767 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="729bd1e7-c268-4327-b30b-3f946a06775e" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.379824 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="26fb18a8-5400-4b0e-9f6f-47ad9c34e855" containerName="oc" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.381313 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.384265 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.384609 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.385520 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.385684 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.385709 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.418605 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt"] Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.478187 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.478544 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtcc9\" (UniqueName: \"kubernetes.io/projected/92c65649-010f-4704-8069-ee58f1d7d383-kube-api-access-dtcc9\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: 
\"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.478578 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.478966 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.479234 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.581610 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.581691 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.581784 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.581885 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.581908 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtcc9\" (UniqueName: \"kubernetes.io/projected/92c65649-010f-4704-8069-ee58f1d7d383-kube-api-access-dtcc9\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.586996 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.587074 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.587505 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.591427 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.614309 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtcc9\" (UniqueName: \"kubernetes.io/projected/92c65649-010f-4704-8069-ee58f1d7d383-kube-api-access-dtcc9\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.721245 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:02 crc kubenswrapper[4761]: W0307 08:41:02.445709 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92c65649_010f_4704_8069_ee58f1d7d383.slice/crio-54ac202b3748e879604826d16f59ec23b2dcf0fd9bb77915c39cc9166bb7f9dd WatchSource:0}: Error finding container 54ac202b3748e879604826d16f59ec23b2dcf0fd9bb77915c39cc9166bb7f9dd: Status 404 returned error can't find the container with id 54ac202b3748e879604826d16f59ec23b2dcf0fd9bb77915c39cc9166bb7f9dd Mar 07 08:41:02 crc kubenswrapper[4761]: I0307 08:41:02.450517 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:41:02 crc kubenswrapper[4761]: I0307 08:41:02.451673 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt"] Mar 07 08:41:03 crc kubenswrapper[4761]: I0307 08:41:03.195784 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" event={"ID":"92c65649-010f-4704-8069-ee58f1d7d383","Type":"ContainerStarted","Data":"54ac202b3748e879604826d16f59ec23b2dcf0fd9bb77915c39cc9166bb7f9dd"} Mar 07 08:41:04 crc kubenswrapper[4761]: I0307 08:41:04.222597 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" event={"ID":"92c65649-010f-4704-8069-ee58f1d7d383","Type":"ContainerStarted","Data":"21b954d3fc796315fd28b49c9714d67bb7403c5de8d58deddaf48737bb4ce02f"} Mar 07 08:41:04 crc kubenswrapper[4761]: I0307 08:41:04.257696 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" podStartSLOduration=2.7460197219999998 podStartE2EDuration="3.257665284s" podCreationTimestamp="2026-03-07 08:41:01 +0000 UTC" firstStartedPulling="2026-03-07 08:41:02.45022116 +0000 UTC m=+3119.359387635" lastFinishedPulling="2026-03-07 08:41:02.961866722 +0000 UTC m=+3119.871033197" observedRunningTime="2026-03-07 08:41:04.242495074 +0000 UTC m=+3121.151661589" watchObservedRunningTime="2026-03-07 08:41:04.257665284 +0000 UTC m=+3121.166831799" Mar 07 08:41:19 crc kubenswrapper[4761]: I0307 08:41:19.435211 4761 generic.go:334] "Generic (PLEG): container finished" podID="92c65649-010f-4704-8069-ee58f1d7d383" containerID="21b954d3fc796315fd28b49c9714d67bb7403c5de8d58deddaf48737bb4ce02f" exitCode=0 Mar 07 08:41:19 crc kubenswrapper[4761]: I0307 08:41:19.437178 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" event={"ID":"92c65649-010f-4704-8069-ee58f1d7d383","Type":"ContainerDied","Data":"21b954d3fc796315fd28b49c9714d67bb7403c5de8d58deddaf48737bb4ce02f"} Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.007387 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.165844 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-inventory\") pod \"92c65649-010f-4704-8069-ee58f1d7d383\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.166367 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-ssh-key-openstack-edpm-ipam\") pod \"92c65649-010f-4704-8069-ee58f1d7d383\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.166619 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtcc9\" (UniqueName: \"kubernetes.io/projected/92c65649-010f-4704-8069-ee58f1d7d383-kube-api-access-dtcc9\") pod \"92c65649-010f-4704-8069-ee58f1d7d383\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.166924 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-1\") pod \"92c65649-010f-4704-8069-ee58f1d7d383\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.166979 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-0\") pod \"92c65649-010f-4704-8069-ee58f1d7d383\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 
08:41:21.171711 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c65649-010f-4704-8069-ee58f1d7d383-kube-api-access-dtcc9" (OuterVolumeSpecName: "kube-api-access-dtcc9") pod "92c65649-010f-4704-8069-ee58f1d7d383" (UID: "92c65649-010f-4704-8069-ee58f1d7d383"). InnerVolumeSpecName "kube-api-access-dtcc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.200630 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "92c65649-010f-4704-8069-ee58f1d7d383" (UID: "92c65649-010f-4704-8069-ee58f1d7d383"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.219709 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-inventory" (OuterVolumeSpecName: "inventory") pod "92c65649-010f-4704-8069-ee58f1d7d383" (UID: "92c65649-010f-4704-8069-ee58f1d7d383"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.230336 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "92c65649-010f-4704-8069-ee58f1d7d383" (UID: "92c65649-010f-4704-8069-ee58f1d7d383"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.236307 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "92c65649-010f-4704-8069-ee58f1d7d383" (UID: "92c65649-010f-4704-8069-ee58f1d7d383"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.271756 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtcc9\" (UniqueName: \"kubernetes.io/projected/92c65649-010f-4704-8069-ee58f1d7d383-kube-api-access-dtcc9\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.272133 4761 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.272946 4761 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.273152 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.273345 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 
08:41:21.463685 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" event={"ID":"92c65649-010f-4704-8069-ee58f1d7d383","Type":"ContainerDied","Data":"54ac202b3748e879604826d16f59ec23b2dcf0fd9bb77915c39cc9166bb7f9dd"} Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.463942 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.463962 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54ac202b3748e879604826d16f59ec23b2dcf0fd9bb77915c39cc9166bb7f9dd" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.842766 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vpb99"] Mar 07 08:41:38 crc kubenswrapper[4761]: E0307 08:41:38.843912 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c65649-010f-4704-8069-ee58f1d7d383" containerName="logging-edpm-deployment-openstack-edpm-ipam" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.843929 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c65649-010f-4704-8069-ee58f1d7d383" containerName="logging-edpm-deployment-openstack-edpm-ipam" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.844242 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c65649-010f-4704-8069-ee58f1d7d383" containerName="logging-edpm-deployment-openstack-edpm-ipam" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.847631 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.891253 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxf9p\" (UniqueName: \"kubernetes.io/projected/449836d3-d241-4d81-88e5-65ee21469bcc-kube-api-access-lxf9p\") pod \"redhat-marketplace-vpb99\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.891646 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-catalog-content\") pod \"redhat-marketplace-vpb99\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.892533 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-utilities\") pod \"redhat-marketplace-vpb99\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.895007 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpb99"] Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.995641 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-utilities\") pod \"redhat-marketplace-vpb99\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.995738 4761 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-lxf9p\" (UniqueName: \"kubernetes.io/projected/449836d3-d241-4d81-88e5-65ee21469bcc-kube-api-access-lxf9p\") pod \"redhat-marketplace-vpb99\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.995857 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-catalog-content\") pod \"redhat-marketplace-vpb99\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.996279 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-utilities\") pod \"redhat-marketplace-vpb99\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.996684 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-catalog-content\") pod \"redhat-marketplace-vpb99\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:39 crc kubenswrapper[4761]: I0307 08:41:39.016741 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxf9p\" (UniqueName: \"kubernetes.io/projected/449836d3-d241-4d81-88e5-65ee21469bcc-kube-api-access-lxf9p\") pod \"redhat-marketplace-vpb99\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:39 crc kubenswrapper[4761]: I0307 08:41:39.211762 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:39 crc kubenswrapper[4761]: I0307 08:41:39.774844 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpb99"] Mar 07 08:41:40 crc kubenswrapper[4761]: I0307 08:41:40.721975 4761 generic.go:334] "Generic (PLEG): container finished" podID="449836d3-d241-4d81-88e5-65ee21469bcc" containerID="97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf" exitCode=0 Mar 07 08:41:40 crc kubenswrapper[4761]: I0307 08:41:40.722823 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpb99" event={"ID":"449836d3-d241-4d81-88e5-65ee21469bcc","Type":"ContainerDied","Data":"97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf"} Mar 07 08:41:40 crc kubenswrapper[4761]: I0307 08:41:40.722855 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpb99" event={"ID":"449836d3-d241-4d81-88e5-65ee21469bcc","Type":"ContainerStarted","Data":"da65cf17e2306610eb36004657d24427244f656afd564077e66d7b381f6fc2a5"} Mar 07 08:41:41 crc kubenswrapper[4761]: I0307 08:41:41.735938 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpb99" event={"ID":"449836d3-d241-4d81-88e5-65ee21469bcc","Type":"ContainerStarted","Data":"8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08"} Mar 07 08:41:42 crc kubenswrapper[4761]: I0307 08:41:42.753323 4761 generic.go:334] "Generic (PLEG): container finished" podID="449836d3-d241-4d81-88e5-65ee21469bcc" containerID="8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08" exitCode=0 Mar 07 08:41:42 crc kubenswrapper[4761]: I0307 08:41:42.753687 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpb99" 
event={"ID":"449836d3-d241-4d81-88e5-65ee21469bcc","Type":"ContainerDied","Data":"8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08"} Mar 07 08:41:43 crc kubenswrapper[4761]: I0307 08:41:43.769405 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:41:43 crc kubenswrapper[4761]: I0307 08:41:43.769858 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:41:43 crc kubenswrapper[4761]: I0307 08:41:43.771524 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpb99" event={"ID":"449836d3-d241-4d81-88e5-65ee21469bcc","Type":"ContainerStarted","Data":"ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190"} Mar 07 08:41:43 crc kubenswrapper[4761]: I0307 08:41:43.804837 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vpb99" podStartSLOduration=3.330991992 podStartE2EDuration="5.804814859s" podCreationTimestamp="2026-03-07 08:41:38 +0000 UTC" firstStartedPulling="2026-03-07 08:41:40.725571054 +0000 UTC m=+3157.634737549" lastFinishedPulling="2026-03-07 08:41:43.199393941 +0000 UTC m=+3160.108560416" observedRunningTime="2026-03-07 08:41:43.78727102 +0000 UTC m=+3160.696437505" watchObservedRunningTime="2026-03-07 08:41:43.804814859 +0000 UTC m=+3160.713981344" Mar 07 08:41:49 crc kubenswrapper[4761]: I0307 08:41:49.212479 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:49 crc kubenswrapper[4761]: I0307 08:41:49.213465 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:49 crc kubenswrapper[4761]: I0307 08:41:49.270939 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:49 crc kubenswrapper[4761]: I0307 08:41:49.947708 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:50 crc kubenswrapper[4761]: I0307 08:41:50.077597 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpb99"] Mar 07 08:41:51 crc kubenswrapper[4761]: I0307 08:41:51.853839 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vpb99" podUID="449836d3-d241-4d81-88e5-65ee21469bcc" containerName="registry-server" containerID="cri-o://ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190" gracePeriod=2 Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.535942 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.667727 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-utilities\") pod \"449836d3-d241-4d81-88e5-65ee21469bcc\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.667819 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxf9p\" (UniqueName: \"kubernetes.io/projected/449836d3-d241-4d81-88e5-65ee21469bcc-kube-api-access-lxf9p\") pod \"449836d3-d241-4d81-88e5-65ee21469bcc\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.668018 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-catalog-content\") pod \"449836d3-d241-4d81-88e5-65ee21469bcc\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.669429 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-utilities" (OuterVolumeSpecName: "utilities") pod "449836d3-d241-4d81-88e5-65ee21469bcc" (UID: "449836d3-d241-4d81-88e5-65ee21469bcc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.678220 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/449836d3-d241-4d81-88e5-65ee21469bcc-kube-api-access-lxf9p" (OuterVolumeSpecName: "kube-api-access-lxf9p") pod "449836d3-d241-4d81-88e5-65ee21469bcc" (UID: "449836d3-d241-4d81-88e5-65ee21469bcc"). InnerVolumeSpecName "kube-api-access-lxf9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.699772 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "449836d3-d241-4d81-88e5-65ee21469bcc" (UID: "449836d3-d241-4d81-88e5-65ee21469bcc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.771667 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.771708 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxf9p\" (UniqueName: \"kubernetes.io/projected/449836d3-d241-4d81-88e5-65ee21469bcc-kube-api-access-lxf9p\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.771738 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.877456 4761 generic.go:334] "Generic (PLEG): container finished" podID="449836d3-d241-4d81-88e5-65ee21469bcc" containerID="ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190" exitCode=0 Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.877525 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpb99" event={"ID":"449836d3-d241-4d81-88e5-65ee21469bcc","Type":"ContainerDied","Data":"ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190"} Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.877554 4761 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-vpb99" event={"ID":"449836d3-d241-4d81-88e5-65ee21469bcc","Type":"ContainerDied","Data":"da65cf17e2306610eb36004657d24427244f656afd564077e66d7b381f6fc2a5"} Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.877569 4761 scope.go:117] "RemoveContainer" containerID="ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.877781 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.927655 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpb99"] Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.927883 4761 scope.go:117] "RemoveContainer" containerID="8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.943419 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpb99"] Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.961990 4761 scope.go:117] "RemoveContainer" containerID="97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf" Mar 07 08:41:53 crc kubenswrapper[4761]: I0307 08:41:53.019995 4761 scope.go:117] "RemoveContainer" containerID="ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190" Mar 07 08:41:53 crc kubenswrapper[4761]: E0307 08:41:53.020513 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190\": container with ID starting with ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190 not found: ID does not exist" containerID="ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190" Mar 07 08:41:53 crc kubenswrapper[4761]: I0307 08:41:53.020554 4761 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190"} err="failed to get container status \"ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190\": rpc error: code = NotFound desc = could not find container \"ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190\": container with ID starting with ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190 not found: ID does not exist" Mar 07 08:41:53 crc kubenswrapper[4761]: I0307 08:41:53.020578 4761 scope.go:117] "RemoveContainer" containerID="8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08" Mar 07 08:41:53 crc kubenswrapper[4761]: E0307 08:41:53.020864 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08\": container with ID starting with 8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08 not found: ID does not exist" containerID="8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08" Mar 07 08:41:53 crc kubenswrapper[4761]: I0307 08:41:53.020927 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08"} err="failed to get container status \"8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08\": rpc error: code = NotFound desc = could not find container \"8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08\": container with ID starting with 8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08 not found: ID does not exist" Mar 07 08:41:53 crc kubenswrapper[4761]: I0307 08:41:53.020960 4761 scope.go:117] "RemoveContainer" containerID="97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf" Mar 07 08:41:53 crc kubenswrapper[4761]: E0307 
08:41:53.021277 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf\": container with ID starting with 97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf not found: ID does not exist" containerID="97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf" Mar 07 08:41:53 crc kubenswrapper[4761]: I0307 08:41:53.021309 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf"} err="failed to get container status \"97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf\": rpc error: code = NotFound desc = could not find container \"97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf\": container with ID starting with 97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf not found: ID does not exist" Mar 07 08:41:53 crc kubenswrapper[4761]: I0307 08:41:53.727435 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="449836d3-d241-4d81-88e5-65ee21469bcc" path="/var/lib/kubelet/pods/449836d3-d241-4d81-88e5-65ee21469bcc/volumes" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.147553 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547882-pvsvc"] Mar 07 08:42:00 crc kubenswrapper[4761]: E0307 08:42:00.148965 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449836d3-d241-4d81-88e5-65ee21469bcc" containerName="extract-utilities" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.148983 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="449836d3-d241-4d81-88e5-65ee21469bcc" containerName="extract-utilities" Mar 07 08:42:00 crc kubenswrapper[4761]: E0307 08:42:00.148994 4761 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="449836d3-d241-4d81-88e5-65ee21469bcc" containerName="extract-content" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.149002 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="449836d3-d241-4d81-88e5-65ee21469bcc" containerName="extract-content" Mar 07 08:42:00 crc kubenswrapper[4761]: E0307 08:42:00.149050 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449836d3-d241-4d81-88e5-65ee21469bcc" containerName="registry-server" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.149058 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="449836d3-d241-4d81-88e5-65ee21469bcc" containerName="registry-server" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.149327 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="449836d3-d241-4d81-88e5-65ee21469bcc" containerName="registry-server" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.150342 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547882-pvsvc" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.152768 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.153320 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.153512 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.161757 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547882-pvsvc"] Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.187925 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n576z\" (UniqueName: 
\"kubernetes.io/projected/729ebc86-ef22-4f0a-9ad4-e1a72a03fa48-kube-api-access-n576z\") pod \"auto-csr-approver-29547882-pvsvc\" (UID: \"729ebc86-ef22-4f0a-9ad4-e1a72a03fa48\") " pod="openshift-infra/auto-csr-approver-29547882-pvsvc" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.290387 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n576z\" (UniqueName: \"kubernetes.io/projected/729ebc86-ef22-4f0a-9ad4-e1a72a03fa48-kube-api-access-n576z\") pod \"auto-csr-approver-29547882-pvsvc\" (UID: \"729ebc86-ef22-4f0a-9ad4-e1a72a03fa48\") " pod="openshift-infra/auto-csr-approver-29547882-pvsvc" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.310571 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n576z\" (UniqueName: \"kubernetes.io/projected/729ebc86-ef22-4f0a-9ad4-e1a72a03fa48-kube-api-access-n576z\") pod \"auto-csr-approver-29547882-pvsvc\" (UID: \"729ebc86-ef22-4f0a-9ad4-e1a72a03fa48\") " pod="openshift-infra/auto-csr-approver-29547882-pvsvc" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.477507 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547882-pvsvc" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.993473 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547882-pvsvc"] Mar 07 08:42:01 crc kubenswrapper[4761]: I0307 08:42:01.990185 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547882-pvsvc" event={"ID":"729ebc86-ef22-4f0a-9ad4-e1a72a03fa48","Type":"ContainerStarted","Data":"26334ea35b43176fa1216619830535f7578c3cf35ef409b88c5967345dad0cea"} Mar 07 08:42:03 crc kubenswrapper[4761]: I0307 08:42:03.025407 4761 generic.go:334] "Generic (PLEG): container finished" podID="729ebc86-ef22-4f0a-9ad4-e1a72a03fa48" containerID="308e64b1726bbae20e797d0badc3a6d633889e1dd0fb91ebac20a675b32d7de8" exitCode=0 Mar 07 08:42:03 crc kubenswrapper[4761]: I0307 08:42:03.025821 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547882-pvsvc" event={"ID":"729ebc86-ef22-4f0a-9ad4-e1a72a03fa48","Type":"ContainerDied","Data":"308e64b1726bbae20e797d0badc3a6d633889e1dd0fb91ebac20a675b32d7de8"} Mar 07 08:42:04 crc kubenswrapper[4761]: I0307 08:42:04.452766 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547882-pvsvc" Mar 07 08:42:04 crc kubenswrapper[4761]: I0307 08:42:04.508482 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n576z\" (UniqueName: \"kubernetes.io/projected/729ebc86-ef22-4f0a-9ad4-e1a72a03fa48-kube-api-access-n576z\") pod \"729ebc86-ef22-4f0a-9ad4-e1a72a03fa48\" (UID: \"729ebc86-ef22-4f0a-9ad4-e1a72a03fa48\") " Mar 07 08:42:04 crc kubenswrapper[4761]: I0307 08:42:04.522267 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729ebc86-ef22-4f0a-9ad4-e1a72a03fa48-kube-api-access-n576z" (OuterVolumeSpecName: "kube-api-access-n576z") pod "729ebc86-ef22-4f0a-9ad4-e1a72a03fa48" (UID: "729ebc86-ef22-4f0a-9ad4-e1a72a03fa48"). InnerVolumeSpecName "kube-api-access-n576z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:42:04 crc kubenswrapper[4761]: I0307 08:42:04.612182 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n576z\" (UniqueName: \"kubernetes.io/projected/729ebc86-ef22-4f0a-9ad4-e1a72a03fa48-kube-api-access-n576z\") on node \"crc\" DevicePath \"\"" Mar 07 08:42:05 crc kubenswrapper[4761]: I0307 08:42:05.052664 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547882-pvsvc" event={"ID":"729ebc86-ef22-4f0a-9ad4-e1a72a03fa48","Type":"ContainerDied","Data":"26334ea35b43176fa1216619830535f7578c3cf35ef409b88c5967345dad0cea"} Mar 07 08:42:05 crc kubenswrapper[4761]: I0307 08:42:05.053061 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26334ea35b43176fa1216619830535f7578c3cf35ef409b88c5967345dad0cea" Mar 07 08:42:05 crc kubenswrapper[4761]: I0307 08:42:05.053125 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547882-pvsvc" Mar 07 08:42:05 crc kubenswrapper[4761]: I0307 08:42:05.580755 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547876-b8vfr"] Mar 07 08:42:05 crc kubenswrapper[4761]: I0307 08:42:05.594167 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547876-b8vfr"] Mar 07 08:42:05 crc kubenswrapper[4761]: I0307 08:42:05.717763 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e75faa3e-3ab5-4269-b968-2d7cdf2d4e67" path="/var/lib/kubelet/pods/e75faa3e-3ab5-4269-b968-2d7cdf2d4e67/volumes" Mar 07 08:42:13 crc kubenswrapper[4761]: I0307 08:42:13.768639 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:42:13 crc kubenswrapper[4761]: I0307 08:42:13.769149 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:42:20 crc kubenswrapper[4761]: I0307 08:42:20.364230 4761 scope.go:117] "RemoveContainer" containerID="c479cd71fe870c2b2433ab8369219c685056c34b9bd6dacda1943e5d420d633a" Mar 07 08:42:43 crc kubenswrapper[4761]: I0307 08:42:43.768471 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:42:43 crc kubenswrapper[4761]: 
I0307 08:42:43.769382 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:42:43 crc kubenswrapper[4761]: I0307 08:42:43.769433 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:42:43 crc kubenswrapper[4761]: I0307 08:42:43.770611 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:42:43 crc kubenswrapper[4761]: I0307 08:42:43.770688 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" gracePeriod=600 Mar 07 08:42:43 crc kubenswrapper[4761]: E0307 08:42:43.917471 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:42:44 crc kubenswrapper[4761]: I0307 08:42:44.341292 4761 generic.go:334] "Generic (PLEG): container finished" 
podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" exitCode=0 Mar 07 08:42:44 crc kubenswrapper[4761]: I0307 08:42:44.341492 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"} Mar 07 08:42:44 crc kubenswrapper[4761]: I0307 08:42:44.341676 4761 scope.go:117] "RemoveContainer" containerID="e9ae313dee187491879d92687a5c7e694903f5090225f2eed07b87d6931c5e34" Mar 07 08:42:44 crc kubenswrapper[4761]: I0307 08:42:44.342611 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:42:44 crc kubenswrapper[4761]: E0307 08:42:44.343084 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:42:57 crc kubenswrapper[4761]: E0307 08:42:57.729582 4761 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.150:40244->38.102.83.150:37445: write tcp 38.102.83.150:40244->38.102.83.150:37445: write: broken pipe Mar 07 08:42:59 crc kubenswrapper[4761]: I0307 08:42:59.706626 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:42:59 crc kubenswrapper[4761]: E0307 08:42:59.707711 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:43:13 crc kubenswrapper[4761]: I0307 08:43:13.719603 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:43:13 crc kubenswrapper[4761]: E0307 08:43:13.720340 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:43:25 crc kubenswrapper[4761]: I0307 08:43:25.706686 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:43:25 crc kubenswrapper[4761]: E0307 08:43:25.707774 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:43:36 crc kubenswrapper[4761]: I0307 08:43:36.706268 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:43:36 crc kubenswrapper[4761]: E0307 08:43:36.709365 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:43:48 crc kubenswrapper[4761]: I0307 08:43:48.706590 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:43:48 crc kubenswrapper[4761]: E0307 08:43:48.707395 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.166067 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547884-tb7dq"] Mar 07 08:44:00 crc kubenswrapper[4761]: E0307 08:44:00.167301 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729ebc86-ef22-4f0a-9ad4-e1a72a03fa48" containerName="oc" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.167330 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="729ebc86-ef22-4f0a-9ad4-e1a72a03fa48" containerName="oc" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.167620 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="729ebc86-ef22-4f0a-9ad4-e1a72a03fa48" containerName="oc" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.168520 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547884-tb7dq" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.173018 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.173104 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.173219 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.184463 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547884-tb7dq"] Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.267886 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msg4r\" (UniqueName: \"kubernetes.io/projected/62c2dec8-8f76-4d6d-9433-2476cb4461ff-kube-api-access-msg4r\") pod \"auto-csr-approver-29547884-tb7dq\" (UID: \"62c2dec8-8f76-4d6d-9433-2476cb4461ff\") " pod="openshift-infra/auto-csr-approver-29547884-tb7dq" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.369814 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msg4r\" (UniqueName: \"kubernetes.io/projected/62c2dec8-8f76-4d6d-9433-2476cb4461ff-kube-api-access-msg4r\") pod \"auto-csr-approver-29547884-tb7dq\" (UID: \"62c2dec8-8f76-4d6d-9433-2476cb4461ff\") " pod="openshift-infra/auto-csr-approver-29547884-tb7dq" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.404031 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msg4r\" (UniqueName: \"kubernetes.io/projected/62c2dec8-8f76-4d6d-9433-2476cb4461ff-kube-api-access-msg4r\") pod \"auto-csr-approver-29547884-tb7dq\" (UID: \"62c2dec8-8f76-4d6d-9433-2476cb4461ff\") " 
pod="openshift-infra/auto-csr-approver-29547884-tb7dq" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.500663 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547884-tb7dq" Mar 07 08:44:01 crc kubenswrapper[4761]: I0307 08:44:01.051489 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547884-tb7dq"] Mar 07 08:44:01 crc kubenswrapper[4761]: I0307 08:44:01.331451 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547884-tb7dq" event={"ID":"62c2dec8-8f76-4d6d-9433-2476cb4461ff","Type":"ContainerStarted","Data":"75b71aa5b749dbb02fc0861f567c82ce021c01ac0af52dc54752dcbb3e6b7b03"} Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:01.999008 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-66gmj"] Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.003774 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.010763 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-66gmj"] Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.121364 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-utilities\") pod \"certified-operators-66gmj\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.121847 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-catalog-content\") pod \"certified-operators-66gmj\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.121881 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xfq7\" (UniqueName: \"kubernetes.io/projected/b9c0a8a7-910a-4539-85c0-1296def3f2df-kube-api-access-6xfq7\") pod \"certified-operators-66gmj\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.223627 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-catalog-content\") pod \"certified-operators-66gmj\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.223669 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6xfq7\" (UniqueName: \"kubernetes.io/projected/b9c0a8a7-910a-4539-85c0-1296def3f2df-kube-api-access-6xfq7\") pod \"certified-operators-66gmj\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.224088 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-utilities\") pod \"certified-operators-66gmj\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.229084 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-utilities\") pod \"certified-operators-66gmj\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.229300 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-catalog-content\") pod \"certified-operators-66gmj\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.259665 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xfq7\" (UniqueName: \"kubernetes.io/projected/b9c0a8a7-910a-4539-85c0-1296def3f2df-kube-api-access-6xfq7\") pod \"certified-operators-66gmj\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.337348 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.705805 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:44:02 crc kubenswrapper[4761]: E0307 08:44:02.706570 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.903081 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-66gmj"] Mar 07 08:44:03 crc kubenswrapper[4761]: I0307 08:44:03.364148 4761 generic.go:334] "Generic (PLEG): container finished" podID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerID="c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80" exitCode=0 Mar 07 08:44:03 crc kubenswrapper[4761]: I0307 08:44:03.364397 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66gmj" event={"ID":"b9c0a8a7-910a-4539-85c0-1296def3f2df","Type":"ContainerDied","Data":"c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80"} Mar 07 08:44:03 crc kubenswrapper[4761]: I0307 08:44:03.364525 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66gmj" event={"ID":"b9c0a8a7-910a-4539-85c0-1296def3f2df","Type":"ContainerStarted","Data":"53632f6edb9226e77a1b728f90c664bec05edc9ed36575f177f161863ce266c3"} Mar 07 08:44:03 crc kubenswrapper[4761]: I0307 08:44:03.374933 4761 generic.go:334] "Generic (PLEG): container finished" podID="62c2dec8-8f76-4d6d-9433-2476cb4461ff" 
containerID="2258f0b313e7184f46e0f7afe6a2b5a5dd2fe1c19534c07cc8e8a71ef95da1b8" exitCode=0 Mar 07 08:44:03 crc kubenswrapper[4761]: I0307 08:44:03.375005 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547884-tb7dq" event={"ID":"62c2dec8-8f76-4d6d-9433-2476cb4461ff","Type":"ContainerDied","Data":"2258f0b313e7184f46e0f7afe6a2b5a5dd2fe1c19534c07cc8e8a71ef95da1b8"} Mar 07 08:44:04 crc kubenswrapper[4761]: I0307 08:44:04.388308 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66gmj" event={"ID":"b9c0a8a7-910a-4539-85c0-1296def3f2df","Type":"ContainerStarted","Data":"bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea"} Mar 07 08:44:04 crc kubenswrapper[4761]: I0307 08:44:04.813422 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547884-tb7dq" Mar 07 08:44:04 crc kubenswrapper[4761]: I0307 08:44:04.917239 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msg4r\" (UniqueName: \"kubernetes.io/projected/62c2dec8-8f76-4d6d-9433-2476cb4461ff-kube-api-access-msg4r\") pod \"62c2dec8-8f76-4d6d-9433-2476cb4461ff\" (UID: \"62c2dec8-8f76-4d6d-9433-2476cb4461ff\") " Mar 07 08:44:04 crc kubenswrapper[4761]: I0307 08:44:04.926651 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62c2dec8-8f76-4d6d-9433-2476cb4461ff-kube-api-access-msg4r" (OuterVolumeSpecName: "kube-api-access-msg4r") pod "62c2dec8-8f76-4d6d-9433-2476cb4461ff" (UID: "62c2dec8-8f76-4d6d-9433-2476cb4461ff"). InnerVolumeSpecName "kube-api-access-msg4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:44:05 crc kubenswrapper[4761]: I0307 08:44:05.022261 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msg4r\" (UniqueName: \"kubernetes.io/projected/62c2dec8-8f76-4d6d-9433-2476cb4461ff-kube-api-access-msg4r\") on node \"crc\" DevicePath \"\"" Mar 07 08:44:05 crc kubenswrapper[4761]: I0307 08:44:05.405510 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547884-tb7dq" event={"ID":"62c2dec8-8f76-4d6d-9433-2476cb4461ff","Type":"ContainerDied","Data":"75b71aa5b749dbb02fc0861f567c82ce021c01ac0af52dc54752dcbb3e6b7b03"} Mar 07 08:44:05 crc kubenswrapper[4761]: I0307 08:44:05.406684 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75b71aa5b749dbb02fc0861f567c82ce021c01ac0af52dc54752dcbb3e6b7b03" Mar 07 08:44:05 crc kubenswrapper[4761]: I0307 08:44:05.405564 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547884-tb7dq" Mar 07 08:44:05 crc kubenswrapper[4761]: I0307 08:44:05.905954 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547878-rqgwh"] Mar 07 08:44:05 crc kubenswrapper[4761]: I0307 08:44:05.918039 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547878-rqgwh"] Mar 07 08:44:07 crc kubenswrapper[4761]: I0307 08:44:07.434304 4761 generic.go:334] "Generic (PLEG): container finished" podID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerID="bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea" exitCode=0 Mar 07 08:44:07 crc kubenswrapper[4761]: I0307 08:44:07.434422 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66gmj" 
event={"ID":"b9c0a8a7-910a-4539-85c0-1296def3f2df","Type":"ContainerDied","Data":"bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea"} Mar 07 08:44:07 crc kubenswrapper[4761]: I0307 08:44:07.732025 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04530ab-a8e3-4851-a74b-f33ead6584f2" path="/var/lib/kubelet/pods/c04530ab-a8e3-4851-a74b-f33ead6584f2/volumes" Mar 07 08:44:08 crc kubenswrapper[4761]: I0307 08:44:08.447276 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66gmj" event={"ID":"b9c0a8a7-910a-4539-85c0-1296def3f2df","Type":"ContainerStarted","Data":"087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432"} Mar 07 08:44:08 crc kubenswrapper[4761]: I0307 08:44:08.472491 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-66gmj" podStartSLOduration=2.954186071 podStartE2EDuration="7.472472613s" podCreationTimestamp="2026-03-07 08:44:01 +0000 UTC" firstStartedPulling="2026-03-07 08:44:03.366734082 +0000 UTC m=+3300.275900577" lastFinishedPulling="2026-03-07 08:44:07.885020604 +0000 UTC m=+3304.794187119" observedRunningTime="2026-03-07 08:44:08.466677738 +0000 UTC m=+3305.375844233" watchObservedRunningTime="2026-03-07 08:44:08.472472613 +0000 UTC m=+3305.381639108" Mar 07 08:44:12 crc kubenswrapper[4761]: I0307 08:44:12.338319 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:12 crc kubenswrapper[4761]: I0307 08:44:12.339061 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:13 crc kubenswrapper[4761]: I0307 08:44:13.401472 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-66gmj" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerName="registry-server" 
probeResult="failure" output=< Mar 07 08:44:13 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:44:13 crc kubenswrapper[4761]: > Mar 07 08:44:17 crc kubenswrapper[4761]: I0307 08:44:17.706739 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:44:17 crc kubenswrapper[4761]: E0307 08:44:17.708321 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:44:20 crc kubenswrapper[4761]: I0307 08:44:20.505922 4761 scope.go:117] "RemoveContainer" containerID="39df8a4331f8fb8c36a05aebc4c360e5a67d3b1a2496b4852eed73be235926a7" Mar 07 08:44:22 crc kubenswrapper[4761]: I0307 08:44:22.397037 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:22 crc kubenswrapper[4761]: I0307 08:44:22.477448 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:22 crc kubenswrapper[4761]: I0307 08:44:22.653926 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-66gmj"] Mar 07 08:44:23 crc kubenswrapper[4761]: I0307 08:44:23.663819 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-66gmj" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerName="registry-server" containerID="cri-o://087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432" gracePeriod=2 Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 
08:44:24.155596 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.190755 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-catalog-content\") pod \"b9c0a8a7-910a-4539-85c0-1296def3f2df\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.191065 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xfq7\" (UniqueName: \"kubernetes.io/projected/b9c0a8a7-910a-4539-85c0-1296def3f2df-kube-api-access-6xfq7\") pod \"b9c0a8a7-910a-4539-85c0-1296def3f2df\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.191112 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-utilities\") pod \"b9c0a8a7-910a-4539-85c0-1296def3f2df\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.192492 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-utilities" (OuterVolumeSpecName: "utilities") pod "b9c0a8a7-910a-4539-85c0-1296def3f2df" (UID: "b9c0a8a7-910a-4539-85c0-1296def3f2df"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.210024 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c0a8a7-910a-4539-85c0-1296def3f2df-kube-api-access-6xfq7" (OuterVolumeSpecName: "kube-api-access-6xfq7") pod "b9c0a8a7-910a-4539-85c0-1296def3f2df" (UID: "b9c0a8a7-910a-4539-85c0-1296def3f2df"). InnerVolumeSpecName "kube-api-access-6xfq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.267320 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9c0a8a7-910a-4539-85c0-1296def3f2df" (UID: "b9c0a8a7-910a-4539-85c0-1296def3f2df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.294103 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xfq7\" (UniqueName: \"kubernetes.io/projected/b9c0a8a7-910a-4539-85c0-1296def3f2df-kube-api-access-6xfq7\") on node \"crc\" DevicePath \"\"" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.294134 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.294143 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.679997 4761 generic.go:334] "Generic (PLEG): container finished" podID="b9c0a8a7-910a-4539-85c0-1296def3f2df" 
containerID="087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432" exitCode=0 Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.680045 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66gmj" event={"ID":"b9c0a8a7-910a-4539-85c0-1296def3f2df","Type":"ContainerDied","Data":"087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432"} Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.680067 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.680102 4761 scope.go:117] "RemoveContainer" containerID="087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.680079 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66gmj" event={"ID":"b9c0a8a7-910a-4539-85c0-1296def3f2df","Type":"ContainerDied","Data":"53632f6edb9226e77a1b728f90c664bec05edc9ed36575f177f161863ce266c3"} Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.724401 4761 scope.go:117] "RemoveContainer" containerID="bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.733783 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-66gmj"] Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.743505 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-66gmj"] Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.751460 4761 scope.go:117] "RemoveContainer" containerID="c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.827550 4761 scope.go:117] "RemoveContainer" containerID="087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432" Mar 07 
08:44:24 crc kubenswrapper[4761]: E0307 08:44:24.828076 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432\": container with ID starting with 087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432 not found: ID does not exist" containerID="087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.828108 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432"} err="failed to get container status \"087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432\": rpc error: code = NotFound desc = could not find container \"087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432\": container with ID starting with 087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432 not found: ID does not exist" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.828129 4761 scope.go:117] "RemoveContainer" containerID="bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea" Mar 07 08:44:24 crc kubenswrapper[4761]: E0307 08:44:24.828640 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea\": container with ID starting with bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea not found: ID does not exist" containerID="bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.828666 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea"} err="failed to get container status 
\"bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea\": rpc error: code = NotFound desc = could not find container \"bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea\": container with ID starting with bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea not found: ID does not exist" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.828684 4761 scope.go:117] "RemoveContainer" containerID="c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80" Mar 07 08:44:24 crc kubenswrapper[4761]: E0307 08:44:24.829027 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80\": container with ID starting with c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80 not found: ID does not exist" containerID="c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.829061 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80"} err="failed to get container status \"c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80\": rpc error: code = NotFound desc = could not find container \"c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80\": container with ID starting with c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80 not found: ID does not exist" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.658180 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zlqtx"] Mar 07 08:44:25 crc kubenswrapper[4761]: E0307 08:44:25.659090 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerName="extract-utilities" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 
08:44:25.659112 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerName="extract-utilities" Mar 07 08:44:25 crc kubenswrapper[4761]: E0307 08:44:25.659125 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerName="registry-server" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.659134 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerName="registry-server" Mar 07 08:44:25 crc kubenswrapper[4761]: E0307 08:44:25.659171 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerName="extract-content" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.659182 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerName="extract-content" Mar 07 08:44:25 crc kubenswrapper[4761]: E0307 08:44:25.659204 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c2dec8-8f76-4d6d-9433-2476cb4461ff" containerName="oc" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.659212 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c2dec8-8f76-4d6d-9433-2476cb4461ff" containerName="oc" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.659467 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c2dec8-8f76-4d6d-9433-2476cb4461ff" containerName="oc" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.659502 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerName="registry-server" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.662102 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.681247 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zlqtx"] Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.723434 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" path="/var/lib/kubelet/pods/b9c0a8a7-910a-4539-85c0-1296def3f2df/volumes" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.732531 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-utilities\") pod \"community-operators-zlqtx\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.732624 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvqbb\" (UniqueName: \"kubernetes.io/projected/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-kube-api-access-gvqbb\") pod \"community-operators-zlqtx\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.732776 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-catalog-content\") pod \"community-operators-zlqtx\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.835170 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-utilities\") 
pod \"community-operators-zlqtx\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.835314 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvqbb\" (UniqueName: \"kubernetes.io/projected/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-kube-api-access-gvqbb\") pod \"community-operators-zlqtx\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.835504 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-catalog-content\") pod \"community-operators-zlqtx\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.836398 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-catalog-content\") pod \"community-operators-zlqtx\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.836963 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-utilities\") pod \"community-operators-zlqtx\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.868143 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvqbb\" (UniqueName: \"kubernetes.io/projected/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-kube-api-access-gvqbb\") pod 
\"community-operators-zlqtx\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:26 crc kubenswrapper[4761]: I0307 08:44:26.001274 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:26 crc kubenswrapper[4761]: I0307 08:44:26.659596 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zlqtx"] Mar 07 08:44:26 crc kubenswrapper[4761]: W0307 08:44:26.664304 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac7f01f5_313b_4fad_ad4a_ec4c685ff4dd.slice/crio-f8652f5c0d2a5b74018f38c6af1a524fa67c36a7d0a5502ecabdd39325a03184 WatchSource:0}: Error finding container f8652f5c0d2a5b74018f38c6af1a524fa67c36a7d0a5502ecabdd39325a03184: Status 404 returned error can't find the container with id f8652f5c0d2a5b74018f38c6af1a524fa67c36a7d0a5502ecabdd39325a03184 Mar 07 08:44:26 crc kubenswrapper[4761]: I0307 08:44:26.741563 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlqtx" event={"ID":"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd","Type":"ContainerStarted","Data":"f8652f5c0d2a5b74018f38c6af1a524fa67c36a7d0a5502ecabdd39325a03184"} Mar 07 08:44:27 crc kubenswrapper[4761]: I0307 08:44:27.753050 4761 generic.go:334] "Generic (PLEG): container finished" podID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerID="2d2a1ab6deac4385522ed70a12680da4bfd1367457e606fa73854c1851b6599d" exitCode=0 Mar 07 08:44:27 crc kubenswrapper[4761]: I0307 08:44:27.753246 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlqtx" event={"ID":"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd","Type":"ContainerDied","Data":"2d2a1ab6deac4385522ed70a12680da4bfd1367457e606fa73854c1851b6599d"} Mar 07 08:44:28 crc kubenswrapper[4761]: I0307 
08:44:28.765770 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlqtx" event={"ID":"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd","Type":"ContainerStarted","Data":"efbc418207a894ed80fa236f19791a7591e753be20b75898b7b052d5dcc30b71"} Mar 07 08:44:30 crc kubenswrapper[4761]: I0307 08:44:30.705971 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:44:30 crc kubenswrapper[4761]: E0307 08:44:30.708117 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:44:30 crc kubenswrapper[4761]: I0307 08:44:30.794205 4761 generic.go:334] "Generic (PLEG): container finished" podID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerID="efbc418207a894ed80fa236f19791a7591e753be20b75898b7b052d5dcc30b71" exitCode=0 Mar 07 08:44:30 crc kubenswrapper[4761]: I0307 08:44:30.794278 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlqtx" event={"ID":"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd","Type":"ContainerDied","Data":"efbc418207a894ed80fa236f19791a7591e753be20b75898b7b052d5dcc30b71"} Mar 07 08:44:31 crc kubenswrapper[4761]: I0307 08:44:31.806183 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlqtx" event={"ID":"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd","Type":"ContainerStarted","Data":"9ce8c525c6d33fd262221a4c1808e77db23f931b93ef8e75fefc6af96f44c90b"} Mar 07 08:44:31 crc kubenswrapper[4761]: I0307 08:44:31.845939 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-zlqtx" podStartSLOduration=3.410255844 podStartE2EDuration="6.845917097s" podCreationTimestamp="2026-03-07 08:44:25 +0000 UTC" firstStartedPulling="2026-03-07 08:44:27.755176673 +0000 UTC m=+3324.664343158" lastFinishedPulling="2026-03-07 08:44:31.190837926 +0000 UTC m=+3328.100004411" observedRunningTime="2026-03-07 08:44:31.837502156 +0000 UTC m=+3328.746668651" watchObservedRunningTime="2026-03-07 08:44:31.845917097 +0000 UTC m=+3328.755083592" Mar 07 08:44:36 crc kubenswrapper[4761]: I0307 08:44:36.002709 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:36 crc kubenswrapper[4761]: I0307 08:44:36.003564 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:36 crc kubenswrapper[4761]: I0307 08:44:36.065168 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:36 crc kubenswrapper[4761]: I0307 08:44:36.937519 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:36 crc kubenswrapper[4761]: I0307 08:44:36.995802 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zlqtx"] Mar 07 08:44:38 crc kubenswrapper[4761]: I0307 08:44:38.876393 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zlqtx" podUID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerName="registry-server" containerID="cri-o://9ce8c525c6d33fd262221a4c1808e77db23f931b93ef8e75fefc6af96f44c90b" gracePeriod=2 Mar 07 08:44:39 crc kubenswrapper[4761]: I0307 08:44:39.931870 4761 generic.go:334] "Generic (PLEG): container finished" podID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" 
containerID="9ce8c525c6d33fd262221a4c1808e77db23f931b93ef8e75fefc6af96f44c90b" exitCode=0 Mar 07 08:44:39 crc kubenswrapper[4761]: I0307 08:44:39.949989 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlqtx" event={"ID":"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd","Type":"ContainerDied","Data":"9ce8c525c6d33fd262221a4c1808e77db23f931b93ef8e75fefc6af96f44c90b"} Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.238182 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.383344 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-utilities\") pod \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.383397 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-catalog-content\") pod \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.383548 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvqbb\" (UniqueName: \"kubernetes.io/projected/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-kube-api-access-gvqbb\") pod \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.384138 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-utilities" (OuterVolumeSpecName: "utilities") pod "ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" (UID: 
"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.389749 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-kube-api-access-gvqbb" (OuterVolumeSpecName: "kube-api-access-gvqbb") pod "ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" (UID: "ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd"). InnerVolumeSpecName "kube-api-access-gvqbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.448196 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" (UID: "ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.486124 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvqbb\" (UniqueName: \"kubernetes.io/projected/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-kube-api-access-gvqbb\") on node \"crc\" DevicePath \"\"" Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.486157 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.486167 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.945970 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-zlqtx" event={"ID":"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd","Type":"ContainerDied","Data":"f8652f5c0d2a5b74018f38c6af1a524fa67c36a7d0a5502ecabdd39325a03184"} Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.947101 4761 scope.go:117] "RemoveContainer" containerID="9ce8c525c6d33fd262221a4c1808e77db23f931b93ef8e75fefc6af96f44c90b" Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.947340 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.993114 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zlqtx"] Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.998402 4761 scope.go:117] "RemoveContainer" containerID="efbc418207a894ed80fa236f19791a7591e753be20b75898b7b052d5dcc30b71" Mar 07 08:44:41 crc kubenswrapper[4761]: I0307 08:44:41.013592 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zlqtx"] Mar 07 08:44:41 crc kubenswrapper[4761]: I0307 08:44:41.031541 4761 scope.go:117] "RemoveContainer" containerID="2d2a1ab6deac4385522ed70a12680da4bfd1367457e606fa73854c1851b6599d" Mar 07 08:44:41 crc kubenswrapper[4761]: I0307 08:44:41.716767 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" path="/var/lib/kubelet/pods/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd/volumes" Mar 07 08:44:42 crc kubenswrapper[4761]: I0307 08:44:42.705865 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:44:42 crc kubenswrapper[4761]: E0307 08:44:42.706411 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:44:57 crc kubenswrapper[4761]: I0307 08:44:57.705478 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:44:57 crc kubenswrapper[4761]: E0307 08:44:57.706310 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.170840 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw"] Mar 07 08:45:00 crc kubenswrapper[4761]: E0307 08:45:00.172366 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerName="extract-content" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.172390 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerName="extract-content" Mar 07 08:45:00 crc kubenswrapper[4761]: E0307 08:45:00.172418 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerName="extract-utilities" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.172434 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerName="extract-utilities" Mar 07 08:45:00 crc kubenswrapper[4761]: E0307 08:45:00.172467 4761 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerName="registry-server" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.172481 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerName="registry-server" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.172976 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerName="registry-server" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.174318 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.176348 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.178566 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.185788 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw"] Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.232310 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7qhn\" (UniqueName: \"kubernetes.io/projected/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-kube-api-access-v7qhn\") pod \"collect-profiles-29547885-zjrmw\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.232978 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-secret-volume\") pod \"collect-profiles-29547885-zjrmw\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.233303 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-config-volume\") pod \"collect-profiles-29547885-zjrmw\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.335302 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-config-volume\") pod \"collect-profiles-29547885-zjrmw\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.335477 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7qhn\" (UniqueName: \"kubernetes.io/projected/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-kube-api-access-v7qhn\") pod \"collect-profiles-29547885-zjrmw\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.335519 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-secret-volume\") pod \"collect-profiles-29547885-zjrmw\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: 
I0307 08:45:00.337009 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-config-volume\") pod \"collect-profiles-29547885-zjrmw\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.348245 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-secret-volume\") pod \"collect-profiles-29547885-zjrmw\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.353063 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7qhn\" (UniqueName: \"kubernetes.io/projected/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-kube-api-access-v7qhn\") pod \"collect-profiles-29547885-zjrmw\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.506571 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:01 crc kubenswrapper[4761]: I0307 08:45:01.011569 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw"] Mar 07 08:45:01 crc kubenswrapper[4761]: I0307 08:45:01.237374 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" event={"ID":"12916c4b-c46a-4104-8a61-c4ca5e3cfb96","Type":"ContainerStarted","Data":"fae009b0563d2ba9916704a95f6f2dc9f06f79557985aa9312d39d3aa19fe6b5"} Mar 07 08:45:02 crc kubenswrapper[4761]: I0307 08:45:02.250439 4761 generic.go:334] "Generic (PLEG): container finished" podID="12916c4b-c46a-4104-8a61-c4ca5e3cfb96" containerID="61984e3e86b5cb7d7262d6662bcd7e8f45cbcda629a21aa027cdcdab8daa0178" exitCode=0 Mar 07 08:45:02 crc kubenswrapper[4761]: I0307 08:45:02.250499 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" event={"ID":"12916c4b-c46a-4104-8a61-c4ca5e3cfb96","Type":"ContainerDied","Data":"61984e3e86b5cb7d7262d6662bcd7e8f45cbcda629a21aa027cdcdab8daa0178"} Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.741865 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw"
Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.825770 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7qhn\" (UniqueName: \"kubernetes.io/projected/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-kube-api-access-v7qhn\") pod \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") "
Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.826345 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-config-volume\") pod \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") "
Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.826651 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-secret-volume\") pod \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") "
Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.827085 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-config-volume" (OuterVolumeSpecName: "config-volume") pod "12916c4b-c46a-4104-8a61-c4ca5e3cfb96" (UID: "12916c4b-c46a-4104-8a61-c4ca5e3cfb96"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.827778 4761 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-config-volume\") on node \"crc\" DevicePath \"\""
Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.833294 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "12916c4b-c46a-4104-8a61-c4ca5e3cfb96" (UID: "12916c4b-c46a-4104-8a61-c4ca5e3cfb96"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.841264 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-kube-api-access-v7qhn" (OuterVolumeSpecName: "kube-api-access-v7qhn") pod "12916c4b-c46a-4104-8a61-c4ca5e3cfb96" (UID: "12916c4b-c46a-4104-8a61-c4ca5e3cfb96"). InnerVolumeSpecName "kube-api-access-v7qhn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.930362 4761 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.930419 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7qhn\" (UniqueName: \"kubernetes.io/projected/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-kube-api-access-v7qhn\") on node \"crc\" DevicePath \"\""
Mar 07 08:45:04 crc kubenswrapper[4761]: I0307 08:45:04.274536 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" event={"ID":"12916c4b-c46a-4104-8a61-c4ca5e3cfb96","Type":"ContainerDied","Data":"fae009b0563d2ba9916704a95f6f2dc9f06f79557985aa9312d39d3aa19fe6b5"}
Mar 07 08:45:04 crc kubenswrapper[4761]: I0307 08:45:04.274576 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fae009b0563d2ba9916704a95f6f2dc9f06f79557985aa9312d39d3aa19fe6b5"
Mar 07 08:45:04 crc kubenswrapper[4761]: I0307 08:45:04.274592 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw"
Mar 07 08:45:04 crc kubenswrapper[4761]: I0307 08:45:04.836828 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv"]
Mar 07 08:45:04 crc kubenswrapper[4761]: I0307 08:45:04.853326 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv"]
Mar 07 08:45:05 crc kubenswrapper[4761]: I0307 08:45:05.723821 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4ef27e8-2f95-4794-a265-433ecf982772" path="/var/lib/kubelet/pods/b4ef27e8-2f95-4794-a265-433ecf982772/volumes"
Mar 07 08:45:08 crc kubenswrapper[4761]: I0307 08:45:08.706495 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"
Mar 07 08:45:08 crc kubenswrapper[4761]: E0307 08:45:08.707389 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:45:20 crc kubenswrapper[4761]: I0307 08:45:20.625864 4761 scope.go:117] "RemoveContainer" containerID="b3cf6b989ce07e65ba7db0ae4f80ce2dbf0060700b3790a4425415dd17be1577"
Mar 07 08:45:22 crc kubenswrapper[4761]: I0307 08:45:22.706136 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"
Mar 07 08:45:22 crc kubenswrapper[4761]: E0307 08:45:22.706884 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:45:36 crc kubenswrapper[4761]: I0307 08:45:36.115775 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"
Mar 07 08:45:36 crc kubenswrapper[4761]: E0307 08:45:36.116545 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:45:50 crc kubenswrapper[4761]: I0307 08:45:50.706495 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"
Mar 07 08:45:50 crc kubenswrapper[4761]: E0307 08:45:50.707305 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.155372 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547886-7h9n2"]
Mar 07 08:46:00 crc kubenswrapper[4761]: E0307 08:46:00.156594 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12916c4b-c46a-4104-8a61-c4ca5e3cfb96" containerName="collect-profiles"
Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.156612 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="12916c4b-c46a-4104-8a61-c4ca5e3cfb96" containerName="collect-profiles"
Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.156938 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="12916c4b-c46a-4104-8a61-c4ca5e3cfb96" containerName="collect-profiles"
Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.158062 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547886-7h9n2"
Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.160495 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.161374 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv"
Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.165868 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547886-7h9n2"]
Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.171871 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.347602 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4sn4\" (UniqueName: \"kubernetes.io/projected/ea7d82cc-2886-4165-adeb-f5c619e985d3-kube-api-access-q4sn4\") pod \"auto-csr-approver-29547886-7h9n2\" (UID: \"ea7d82cc-2886-4165-adeb-f5c619e985d3\") " pod="openshift-infra/auto-csr-approver-29547886-7h9n2"
Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.450650 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4sn4\" (UniqueName: \"kubernetes.io/projected/ea7d82cc-2886-4165-adeb-f5c619e985d3-kube-api-access-q4sn4\") pod \"auto-csr-approver-29547886-7h9n2\" (UID: \"ea7d82cc-2886-4165-adeb-f5c619e985d3\") " pod="openshift-infra/auto-csr-approver-29547886-7h9n2"
Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.484328 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4sn4\" (UniqueName: \"kubernetes.io/projected/ea7d82cc-2886-4165-adeb-f5c619e985d3-kube-api-access-q4sn4\") pod \"auto-csr-approver-29547886-7h9n2\" (UID: \"ea7d82cc-2886-4165-adeb-f5c619e985d3\") " pod="openshift-infra/auto-csr-approver-29547886-7h9n2"
Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.786044 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547886-7h9n2"
Mar 07 08:46:01 crc kubenswrapper[4761]: I0307 08:46:01.291059 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547886-7h9n2"]
Mar 07 08:46:02 crc kubenswrapper[4761]: I0307 08:46:02.126194 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547886-7h9n2" event={"ID":"ea7d82cc-2886-4165-adeb-f5c619e985d3","Type":"ContainerStarted","Data":"0f90e8207f80a4e5376d84338a627f5415aa0f0f83eac065f4e994fcdefeba7f"}
Mar 07 08:46:03 crc kubenswrapper[4761]: I0307 08:46:03.142769 4761 generic.go:334] "Generic (PLEG): container finished" podID="ea7d82cc-2886-4165-adeb-f5c619e985d3" containerID="ac108cfd8711bdc247b736dad6e49c69e12c1588416ca0d497290394ba52eb0e" exitCode=0
Mar 07 08:46:03 crc kubenswrapper[4761]: I0307 08:46:03.143280 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547886-7h9n2" event={"ID":"ea7d82cc-2886-4165-adeb-f5c619e985d3","Type":"ContainerDied","Data":"ac108cfd8711bdc247b736dad6e49c69e12c1588416ca0d497290394ba52eb0e"}
Mar 07 08:46:04 crc kubenswrapper[4761]: I0307 08:46:04.621430 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547886-7h9n2"
Mar 07 08:46:04 crc kubenswrapper[4761]: I0307 08:46:04.708109 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4sn4\" (UniqueName: \"kubernetes.io/projected/ea7d82cc-2886-4165-adeb-f5c619e985d3-kube-api-access-q4sn4\") pod \"ea7d82cc-2886-4165-adeb-f5c619e985d3\" (UID: \"ea7d82cc-2886-4165-adeb-f5c619e985d3\") "
Mar 07 08:46:04 crc kubenswrapper[4761]: I0307 08:46:04.713944 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea7d82cc-2886-4165-adeb-f5c619e985d3-kube-api-access-q4sn4" (OuterVolumeSpecName: "kube-api-access-q4sn4") pod "ea7d82cc-2886-4165-adeb-f5c619e985d3" (UID: "ea7d82cc-2886-4165-adeb-f5c619e985d3"). InnerVolumeSpecName "kube-api-access-q4sn4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:46:04 crc kubenswrapper[4761]: I0307 08:46:04.814056 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4sn4\" (UniqueName: \"kubernetes.io/projected/ea7d82cc-2886-4165-adeb-f5c619e985d3-kube-api-access-q4sn4\") on node \"crc\" DevicePath \"\""
Mar 07 08:46:05 crc kubenswrapper[4761]: I0307 08:46:05.172083 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547886-7h9n2" event={"ID":"ea7d82cc-2886-4165-adeb-f5c619e985d3","Type":"ContainerDied","Data":"0f90e8207f80a4e5376d84338a627f5415aa0f0f83eac065f4e994fcdefeba7f"}
Mar 07 08:46:05 crc kubenswrapper[4761]: I0307 08:46:05.172260 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f90e8207f80a4e5376d84338a627f5415aa0f0f83eac065f4e994fcdefeba7f"
Mar 07 08:46:05 crc kubenswrapper[4761]: I0307 08:46:05.172345 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547886-7h9n2"
Mar 07 08:46:05 crc kubenswrapper[4761]: I0307 08:46:05.697545 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547880-4p5x8"]
Mar 07 08:46:05 crc kubenswrapper[4761]: I0307 08:46:05.707005 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"
Mar 07 08:46:05 crc kubenswrapper[4761]: E0307 08:46:05.707300 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:46:05 crc kubenswrapper[4761]: I0307 08:46:05.718866 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547880-4p5x8"]
Mar 07 08:46:07 crc kubenswrapper[4761]: I0307 08:46:07.721258 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26fb18a8-5400-4b0e-9f6f-47ad9c34e855" path="/var/lib/kubelet/pods/26fb18a8-5400-4b0e-9f6f-47ad9c34e855/volumes"
Mar 07 08:46:20 crc kubenswrapper[4761]: I0307 08:46:20.705775 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"
Mar 07 08:46:20 crc kubenswrapper[4761]: E0307 08:46:20.706626 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:46:20 crc kubenswrapper[4761]: I0307 08:46:20.741487 4761 scope.go:117] "RemoveContainer" containerID="5645833573c137c62acdca0e5dbcbaf1825a9c618414f8121090966cb4f346a1"
Mar 07 08:46:31 crc kubenswrapper[4761]: I0307 08:46:31.705758 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"
Mar 07 08:46:31 crc kubenswrapper[4761]: E0307 08:46:31.706491 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:46:45 crc kubenswrapper[4761]: I0307 08:46:45.707110 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"
Mar 07 08:46:45 crc kubenswrapper[4761]: E0307 08:46:45.708180 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:46:57 crc kubenswrapper[4761]: I0307 08:46:57.706037 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"
Mar 07 08:46:57 crc kubenswrapper[4761]: E0307 08:46:57.707107 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:47:11 crc kubenswrapper[4761]: I0307 08:47:11.706269 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"
Mar 07 08:47:11 crc kubenswrapper[4761]: E0307 08:47:11.707532 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:47:25 crc kubenswrapper[4761]: I0307 08:47:25.707582 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"
Mar 07 08:47:25 crc kubenswrapper[4761]: E0307 08:47:25.708406 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:47:38 crc kubenswrapper[4761]: I0307 08:47:38.706024 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"
Mar 07 08:47:38 crc kubenswrapper[4761]: E0307 08:47:38.707298 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:47:51 crc kubenswrapper[4761]: I0307 08:47:51.705408 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"
Mar 07 08:47:52 crc kubenswrapper[4761]: I0307 08:47:52.492783 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"ef65b2e950cadd2d0ba24302769cb6d9be0b6d35f82ee72399a22260360a7b43"}
Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.079160 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sxbnx"]
Mar 07 08:47:54 crc kubenswrapper[4761]: E0307 08:47:54.080177 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7d82cc-2886-4165-adeb-f5c619e985d3" containerName="oc"
Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.080194 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7d82cc-2886-4165-adeb-f5c619e985d3" containerName="oc"
Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.080432 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea7d82cc-2886-4165-adeb-f5c619e985d3" containerName="oc"
Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.085283 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sxbnx"
Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.121422 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sxbnx"]
Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.269733 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-catalog-content\") pod \"redhat-operators-sxbnx\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " pod="openshift-marketplace/redhat-operators-sxbnx"
Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.269818 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd8bn\" (UniqueName: \"kubernetes.io/projected/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-kube-api-access-rd8bn\") pod \"redhat-operators-sxbnx\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " pod="openshift-marketplace/redhat-operators-sxbnx"
Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.270027 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-utilities\") pod \"redhat-operators-sxbnx\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " pod="openshift-marketplace/redhat-operators-sxbnx"
Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.372395 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-catalog-content\") pod \"redhat-operators-sxbnx\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " pod="openshift-marketplace/redhat-operators-sxbnx"
Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.372681 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd8bn\" (UniqueName: \"kubernetes.io/projected/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-kube-api-access-rd8bn\") pod \"redhat-operators-sxbnx\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " pod="openshift-marketplace/redhat-operators-sxbnx"
Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.372838 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-utilities\") pod \"redhat-operators-sxbnx\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " pod="openshift-marketplace/redhat-operators-sxbnx"
Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.373111 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-catalog-content\") pod \"redhat-operators-sxbnx\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " pod="openshift-marketplace/redhat-operators-sxbnx"
Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.373214 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-utilities\") pod \"redhat-operators-sxbnx\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " pod="openshift-marketplace/redhat-operators-sxbnx"
Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.394922 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd8bn\" (UniqueName: \"kubernetes.io/projected/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-kube-api-access-rd8bn\") pod \"redhat-operators-sxbnx\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " pod="openshift-marketplace/redhat-operators-sxbnx"
Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.410260 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sxbnx"
Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.998173 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sxbnx"]
Mar 07 08:47:55 crc kubenswrapper[4761]: W0307 08:47:55.001107 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea3a7d76_259b_48dd_bdaa_3f9eb828f201.slice/crio-6f02430aea9c431c280799072cf37db6ebbf66360dd99411015fd1f7f3c786c9 WatchSource:0}: Error finding container 6f02430aea9c431c280799072cf37db6ebbf66360dd99411015fd1f7f3c786c9: Status 404 returned error can't find the container with id 6f02430aea9c431c280799072cf37db6ebbf66360dd99411015fd1f7f3c786c9
Mar 07 08:47:55 crc kubenswrapper[4761]: I0307 08:47:55.525975 4761 generic.go:334] "Generic (PLEG): container finished" podID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerID="4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6" exitCode=0
Mar 07 08:47:55 crc kubenswrapper[4761]: I0307 08:47:55.526135 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbnx" event={"ID":"ea3a7d76-259b-48dd-bdaa-3f9eb828f201","Type":"ContainerDied","Data":"4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6"}
Mar 07 08:47:55 crc kubenswrapper[4761]: I0307 08:47:55.526538 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbnx" event={"ID":"ea3a7d76-259b-48dd-bdaa-3f9eb828f201","Type":"ContainerStarted","Data":"6f02430aea9c431c280799072cf37db6ebbf66360dd99411015fd1f7f3c786c9"}
Mar 07 08:47:55 crc kubenswrapper[4761]: I0307 08:47:55.529998 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 07 08:47:56 crc kubenswrapper[4761]: I0307 08:47:56.541043 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbnx" event={"ID":"ea3a7d76-259b-48dd-bdaa-3f9eb828f201","Type":"ContainerStarted","Data":"85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f"}
Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.150940 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547888-8x864"]
Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.153132 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547888-8x864"
Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.155792 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.156122 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.156755 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv"
Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.164904 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547888-8x864"]
Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.329383 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m48c8\" (UniqueName: \"kubernetes.io/projected/d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f-kube-api-access-m48c8\") pod \"auto-csr-approver-29547888-8x864\" (UID: \"d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f\") " pod="openshift-infra/auto-csr-approver-29547888-8x864"
Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.432283 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m48c8\" (UniqueName: \"kubernetes.io/projected/d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f-kube-api-access-m48c8\") pod \"auto-csr-approver-29547888-8x864\" (UID: \"d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f\") " pod="openshift-infra/auto-csr-approver-29547888-8x864"
Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.505744 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m48c8\" (UniqueName: \"kubernetes.io/projected/d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f-kube-api-access-m48c8\") pod \"auto-csr-approver-29547888-8x864\" (UID: \"d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f\") " pod="openshift-infra/auto-csr-approver-29547888-8x864"
Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.771455 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547888-8x864"
Mar 07 08:48:01 crc kubenswrapper[4761]: I0307 08:48:01.312395 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547888-8x864"]
Mar 07 08:48:01 crc kubenswrapper[4761]: I0307 08:48:01.607143 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547888-8x864" event={"ID":"d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f","Type":"ContainerStarted","Data":"af5f6d0035fac7bbd8f33f987db4d040cd74e86b09fd6c592a3c7515c214c583"}
Mar 07 08:48:02 crc kubenswrapper[4761]: I0307 08:48:02.619574 4761 generic.go:334] "Generic (PLEG): container finished" podID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerID="85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f" exitCode=0
Mar 07 08:48:02 crc kubenswrapper[4761]: I0307 08:48:02.619623 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbnx" event={"ID":"ea3a7d76-259b-48dd-bdaa-3f9eb828f201","Type":"ContainerDied","Data":"85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f"}
Mar 07 08:48:03 crc kubenswrapper[4761]: I0307 08:48:03.633207 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbnx" event={"ID":"ea3a7d76-259b-48dd-bdaa-3f9eb828f201","Type":"ContainerStarted","Data":"1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9"}
Mar 07 08:48:03 crc kubenswrapper[4761]: I0307 08:48:03.635479 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547888-8x864" event={"ID":"d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f","Type":"ContainerStarted","Data":"cda9fe944526e544b8a82b62eb8056d564d5c6bed8acb7abdac1932157cdb6c6"}
Mar 07 08:48:03 crc kubenswrapper[4761]: I0307 08:48:03.657217 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sxbnx" podStartSLOduration=1.813874647 podStartE2EDuration="9.657191863s" podCreationTimestamp="2026-03-07 08:47:54 +0000 UTC" firstStartedPulling="2026-03-07 08:47:55.529789429 +0000 UTC m=+3532.438955904" lastFinishedPulling="2026-03-07 08:48:03.373106645 +0000 UTC m=+3540.282273120" observedRunningTime="2026-03-07 08:48:03.650427084 +0000 UTC m=+3540.559593569" watchObservedRunningTime="2026-03-07 08:48:03.657191863 +0000 UTC m=+3540.566358348"
Mar 07 08:48:03 crc kubenswrapper[4761]: I0307 08:48:03.672376 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547888-8x864" podStartSLOduration=2.222233508 podStartE2EDuration="3.672356482s" podCreationTimestamp="2026-03-07 08:48:00 +0000 UTC" firstStartedPulling="2026-03-07 08:48:01.31345863 +0000 UTC m=+3538.222625105" lastFinishedPulling="2026-03-07 08:48:02.763581604 +0000 UTC m=+3539.672748079" observedRunningTime="2026-03-07 08:48:03.665099101 +0000 UTC m=+3540.574265576" watchObservedRunningTime="2026-03-07 08:48:03.672356482 +0000 UTC m=+3540.581522967"
Mar 07 08:48:04 crc kubenswrapper[4761]: I0307 08:48:04.411179 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sxbnx"
Mar 07 08:48:04 crc kubenswrapper[4761]: I0307 08:48:04.411464 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sxbnx"
Mar 07 08:48:04 crc kubenswrapper[4761]: I0307 08:48:04.651951 4761 generic.go:334] "Generic (PLEG): container finished" podID="d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f" containerID="cda9fe944526e544b8a82b62eb8056d564d5c6bed8acb7abdac1932157cdb6c6" exitCode=0
Mar 07 08:48:04 crc kubenswrapper[4761]: I0307 08:48:04.652026 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547888-8x864" event={"ID":"d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f","Type":"ContainerDied","Data":"cda9fe944526e544b8a82b62eb8056d564d5c6bed8acb7abdac1932157cdb6c6"}
Mar 07 08:48:05 crc kubenswrapper[4761]: I0307 08:48:05.529740 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sxbnx" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="registry-server" probeResult="failure" output=<
Mar 07 08:48:05 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 08:48:05 crc kubenswrapper[4761]: >
Mar 07 08:48:06 crc kubenswrapper[4761]: I0307 08:48:06.145932 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547888-8x864"
Mar 07 08:48:06 crc kubenswrapper[4761]: I0307 08:48:06.271358 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m48c8\" (UniqueName: \"kubernetes.io/projected/d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f-kube-api-access-m48c8\") pod \"d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f\" (UID: \"d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f\") "
Mar 07 08:48:06 crc kubenswrapper[4761]: I0307 08:48:06.281090 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f-kube-api-access-m48c8" (OuterVolumeSpecName: "kube-api-access-m48c8") pod "d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f" (UID: "d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f"). InnerVolumeSpecName "kube-api-access-m48c8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:48:06 crc kubenswrapper[4761]: I0307 08:48:06.374221 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m48c8\" (UniqueName: \"kubernetes.io/projected/d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f-kube-api-access-m48c8\") on node \"crc\" DevicePath \"\""
Mar 07 08:48:06 crc kubenswrapper[4761]: I0307 08:48:06.681839 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547888-8x864" event={"ID":"d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f","Type":"ContainerDied","Data":"af5f6d0035fac7bbd8f33f987db4d040cd74e86b09fd6c592a3c7515c214c583"}
Mar 07 08:48:06 crc kubenswrapper[4761]: I0307 08:48:06.681905 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5f6d0035fac7bbd8f33f987db4d040cd74e86b09fd6c592a3c7515c214c583"
Mar 07 08:48:06 crc kubenswrapper[4761]: I0307 08:48:06.681992 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547888-8x864"
Mar 07 08:48:06 crc kubenswrapper[4761]: I0307 08:48:06.755964 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547882-pvsvc"]
Mar 07 08:48:06 crc kubenswrapper[4761]: I0307 08:48:06.770211 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547882-pvsvc"]
Mar 07 08:48:07 crc kubenswrapper[4761]: I0307 08:48:07.720347 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="729ebc86-ef22-4f0a-9ad4-e1a72a03fa48" path="/var/lib/kubelet/pods/729ebc86-ef22-4f0a-9ad4-e1a72a03fa48/volumes"
Mar 07 08:48:15 crc kubenswrapper[4761]: I0307 08:48:15.511092 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sxbnx" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="registry-server" probeResult="failure" output=<
Mar 07 08:48:15 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 08:48:15 crc kubenswrapper[4761]: >
Mar 07 08:48:20 crc kubenswrapper[4761]: I0307 08:48:20.868477 4761 scope.go:117] "RemoveContainer" containerID="308e64b1726bbae20e797d0badc3a6d633889e1dd0fb91ebac20a675b32d7de8"
Mar 07 08:48:25 crc kubenswrapper[4761]: I0307 08:48:25.476331 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sxbnx" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="registry-server" probeResult="failure" output=<
Mar 07 08:48:25 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 08:48:25 crc kubenswrapper[4761]: >
Mar 07 08:48:34 crc kubenswrapper[4761]: I0307 08:48:34.463494 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sxbnx"
Mar 07 08:48:34 crc kubenswrapper[4761]: I0307 08:48:34.542999 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness"
status="ready" pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:48:34 crc kubenswrapper[4761]: I0307 08:48:34.713416 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sxbnx"] Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.032220 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sxbnx" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="registry-server" containerID="cri-o://1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9" gracePeriod=2 Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.655050 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.793258 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-utilities\") pod \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.793440 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd8bn\" (UniqueName: \"kubernetes.io/projected/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-kube-api-access-rd8bn\") pod \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.793511 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-catalog-content\") pod \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.794672 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-utilities" (OuterVolumeSpecName: "utilities") pod "ea3a7d76-259b-48dd-bdaa-3f9eb828f201" (UID: "ea3a7d76-259b-48dd-bdaa-3f9eb828f201"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.799503 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-kube-api-access-rd8bn" (OuterVolumeSpecName: "kube-api-access-rd8bn") pod "ea3a7d76-259b-48dd-bdaa-3f9eb828f201" (UID: "ea3a7d76-259b-48dd-bdaa-3f9eb828f201"). InnerVolumeSpecName "kube-api-access-rd8bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.918210 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd8bn\" (UniqueName: \"kubernetes.io/projected/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-kube-api-access-rd8bn\") on node \"crc\" DevicePath \"\"" Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.918446 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.965194 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea3a7d76-259b-48dd-bdaa-3f9eb828f201" (UID: "ea3a7d76-259b-48dd-bdaa-3f9eb828f201"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.020553 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.046971 4761 generic.go:334] "Generic (PLEG): container finished" podID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerID="1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9" exitCode=0 Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.047046 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbnx" event={"ID":"ea3a7d76-259b-48dd-bdaa-3f9eb828f201","Type":"ContainerDied","Data":"1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9"} Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.047076 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbnx" event={"ID":"ea3a7d76-259b-48dd-bdaa-3f9eb828f201","Type":"ContainerDied","Data":"6f02430aea9c431c280799072cf37db6ebbf66360dd99411015fd1f7f3c786c9"} Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.047095 4761 scope.go:117] "RemoveContainer" containerID="1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.047285 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.108772 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sxbnx"] Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.132010 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sxbnx"] Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.149870 4761 scope.go:117] "RemoveContainer" containerID="85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.180629 4761 scope.go:117] "RemoveContainer" containerID="4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.259301 4761 scope.go:117] "RemoveContainer" containerID="1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9" Mar 07 08:48:37 crc kubenswrapper[4761]: E0307 08:48:37.259828 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9\": container with ID starting with 1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9 not found: ID does not exist" containerID="1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.259874 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9"} err="failed to get container status \"1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9\": rpc error: code = NotFound desc = could not find container \"1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9\": container with ID starting with 1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9 not found: ID does 
not exist" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.259901 4761 scope.go:117] "RemoveContainer" containerID="85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f" Mar 07 08:48:37 crc kubenswrapper[4761]: E0307 08:48:37.260273 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f\": container with ID starting with 85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f not found: ID does not exist" containerID="85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.260306 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f"} err="failed to get container status \"85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f\": rpc error: code = NotFound desc = could not find container \"85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f\": container with ID starting with 85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f not found: ID does not exist" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.260342 4761 scope.go:117] "RemoveContainer" containerID="4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6" Mar 07 08:48:37 crc kubenswrapper[4761]: E0307 08:48:37.260528 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6\": container with ID starting with 4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6 not found: ID does not exist" containerID="4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.260552 4761 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6"} err="failed to get container status \"4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6\": rpc error: code = NotFound desc = could not find container \"4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6\": container with ID starting with 4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6 not found: ID does not exist" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.718182 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" path="/var/lib/kubelet/pods/ea3a7d76-259b-48dd-bdaa-3f9eb828f201/volumes" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.142271 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547890-4n5w5"] Mar 07 08:50:00 crc kubenswrapper[4761]: E0307 08:50:00.143242 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="extract-utilities" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.143256 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="extract-utilities" Mar 07 08:50:00 crc kubenswrapper[4761]: E0307 08:50:00.143292 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="registry-server" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.143298 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="registry-server" Mar 07 08:50:00 crc kubenswrapper[4761]: E0307 08:50:00.143312 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f" containerName="oc" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.143318 4761 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f" containerName="oc" Mar 07 08:50:00 crc kubenswrapper[4761]: E0307 08:50:00.143332 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="extract-content" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.143338 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="extract-content" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.143538 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f" containerName="oc" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.143554 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="registry-server" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.144360 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547890-4n5w5" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.148108 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.150403 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.150852 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.159919 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547890-4n5w5"] Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.252143 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkvb7\" (UniqueName: 
\"kubernetes.io/projected/c12dbae5-26b3-47ac-8709-9d6609dabbdf-kube-api-access-bkvb7\") pod \"auto-csr-approver-29547890-4n5w5\" (UID: \"c12dbae5-26b3-47ac-8709-9d6609dabbdf\") " pod="openshift-infra/auto-csr-approver-29547890-4n5w5" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.355119 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkvb7\" (UniqueName: \"kubernetes.io/projected/c12dbae5-26b3-47ac-8709-9d6609dabbdf-kube-api-access-bkvb7\") pod \"auto-csr-approver-29547890-4n5w5\" (UID: \"c12dbae5-26b3-47ac-8709-9d6609dabbdf\") " pod="openshift-infra/auto-csr-approver-29547890-4n5w5" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.380256 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkvb7\" (UniqueName: \"kubernetes.io/projected/c12dbae5-26b3-47ac-8709-9d6609dabbdf-kube-api-access-bkvb7\") pod \"auto-csr-approver-29547890-4n5w5\" (UID: \"c12dbae5-26b3-47ac-8709-9d6609dabbdf\") " pod="openshift-infra/auto-csr-approver-29547890-4n5w5" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.467754 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547890-4n5w5" Mar 07 08:50:01 crc kubenswrapper[4761]: I0307 08:50:01.022349 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547890-4n5w5"] Mar 07 08:50:01 crc kubenswrapper[4761]: I0307 08:50:01.081183 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547890-4n5w5" event={"ID":"c12dbae5-26b3-47ac-8709-9d6609dabbdf","Type":"ContainerStarted","Data":"c9a5d58c75c7162703a8a6c02be7b062c810e186b24d1c195f462e20119937a3"} Mar 07 08:50:03 crc kubenswrapper[4761]: I0307 08:50:03.110287 4761 generic.go:334] "Generic (PLEG): container finished" podID="c12dbae5-26b3-47ac-8709-9d6609dabbdf" containerID="33342531c4e730445c577ee05f6170f96c2fe0e2e49bbda868bde245eb6c34f9" exitCode=0 Mar 07 08:50:03 crc kubenswrapper[4761]: I0307 08:50:03.110424 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547890-4n5w5" event={"ID":"c12dbae5-26b3-47ac-8709-9d6609dabbdf","Type":"ContainerDied","Data":"33342531c4e730445c577ee05f6170f96c2fe0e2e49bbda868bde245eb6c34f9"} Mar 07 08:50:04 crc kubenswrapper[4761]: I0307 08:50:04.555093 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547890-4n5w5" Mar 07 08:50:04 crc kubenswrapper[4761]: I0307 08:50:04.662767 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkvb7\" (UniqueName: \"kubernetes.io/projected/c12dbae5-26b3-47ac-8709-9d6609dabbdf-kube-api-access-bkvb7\") pod \"c12dbae5-26b3-47ac-8709-9d6609dabbdf\" (UID: \"c12dbae5-26b3-47ac-8709-9d6609dabbdf\") " Mar 07 08:50:04 crc kubenswrapper[4761]: I0307 08:50:04.671109 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c12dbae5-26b3-47ac-8709-9d6609dabbdf-kube-api-access-bkvb7" (OuterVolumeSpecName: "kube-api-access-bkvb7") pod "c12dbae5-26b3-47ac-8709-9d6609dabbdf" (UID: "c12dbae5-26b3-47ac-8709-9d6609dabbdf"). InnerVolumeSpecName "kube-api-access-bkvb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:50:04 crc kubenswrapper[4761]: I0307 08:50:04.765554 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkvb7\" (UniqueName: \"kubernetes.io/projected/c12dbae5-26b3-47ac-8709-9d6609dabbdf-kube-api-access-bkvb7\") on node \"crc\" DevicePath \"\"" Mar 07 08:50:05 crc kubenswrapper[4761]: I0307 08:50:05.136343 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547890-4n5w5" event={"ID":"c12dbae5-26b3-47ac-8709-9d6609dabbdf","Type":"ContainerDied","Data":"c9a5d58c75c7162703a8a6c02be7b062c810e186b24d1c195f462e20119937a3"} Mar 07 08:50:05 crc kubenswrapper[4761]: I0307 08:50:05.136389 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9a5d58c75c7162703a8a6c02be7b062c810e186b24d1c195f462e20119937a3" Mar 07 08:50:05 crc kubenswrapper[4761]: I0307 08:50:05.136456 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547890-4n5w5" Mar 07 08:50:05 crc kubenswrapper[4761]: I0307 08:50:05.642567 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547884-tb7dq"] Mar 07 08:50:05 crc kubenswrapper[4761]: I0307 08:50:05.659108 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547884-tb7dq"] Mar 07 08:50:05 crc kubenswrapper[4761]: I0307 08:50:05.730561 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62c2dec8-8f76-4d6d-9433-2476cb4461ff" path="/var/lib/kubelet/pods/62c2dec8-8f76-4d6d-9433-2476cb4461ff/volumes" Mar 07 08:50:13 crc kubenswrapper[4761]: I0307 08:50:13.768559 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:50:13 crc kubenswrapper[4761]: I0307 08:50:13.769182 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:50:21 crc kubenswrapper[4761]: I0307 08:50:21.054208 4761 scope.go:117] "RemoveContainer" containerID="2258f0b313e7184f46e0f7afe6a2b5a5dd2fe1c19534c07cc8e8a71ef95da1b8" Mar 07 08:50:43 crc kubenswrapper[4761]: I0307 08:50:43.768766 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:50:43 crc kubenswrapper[4761]: 
I0307 08:50:43.769430 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:51:13 crc kubenswrapper[4761]: I0307 08:51:13.769056 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:51:13 crc kubenswrapper[4761]: I0307 08:51:13.769576 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:51:13 crc kubenswrapper[4761]: I0307 08:51:13.769612 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:51:13 crc kubenswrapper[4761]: I0307 08:51:13.773515 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef65b2e950cadd2d0ba24302769cb6d9be0b6d35f82ee72399a22260360a7b43"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:51:13 crc kubenswrapper[4761]: I0307 08:51:13.773640 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" 
containerName="machine-config-daemon" containerID="cri-o://ef65b2e950cadd2d0ba24302769cb6d9be0b6d35f82ee72399a22260360a7b43" gracePeriod=600 Mar 07 08:51:14 crc kubenswrapper[4761]: I0307 08:51:14.047395 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="ef65b2e950cadd2d0ba24302769cb6d9be0b6d35f82ee72399a22260360a7b43" exitCode=0 Mar 07 08:51:14 crc kubenswrapper[4761]: I0307 08:51:14.047437 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"ef65b2e950cadd2d0ba24302769cb6d9be0b6d35f82ee72399a22260360a7b43"} Mar 07 08:51:14 crc kubenswrapper[4761]: I0307 08:51:14.047481 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:51:15 crc kubenswrapper[4761]: I0307 08:51:15.068872 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803"} Mar 07 08:51:44 crc kubenswrapper[4761]: I0307 08:51:44.899258 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-69d7d999d5-z6jzw" podUID="ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.141992 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547892-pgkbb"] Mar 07 08:52:00 crc kubenswrapper[4761]: E0307 08:52:00.146934 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c12dbae5-26b3-47ac-8709-9d6609dabbdf" containerName="oc" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.146970 4761 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c12dbae5-26b3-47ac-8709-9d6609dabbdf" containerName="oc" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.147421 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c12dbae5-26b3-47ac-8709-9d6609dabbdf" containerName="oc" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.148484 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547892-pgkbb" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.150743 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.152239 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.152783 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.167467 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547892-pgkbb"] Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.244108 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks7nh\" (UniqueName: \"kubernetes.io/projected/a4994561-8589-40fb-92a6-20c78e23331b-kube-api-access-ks7nh\") pod \"auto-csr-approver-29547892-pgkbb\" (UID: \"a4994561-8589-40fb-92a6-20c78e23331b\") " pod="openshift-infra/auto-csr-approver-29547892-pgkbb" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.346905 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks7nh\" (UniqueName: \"kubernetes.io/projected/a4994561-8589-40fb-92a6-20c78e23331b-kube-api-access-ks7nh\") pod \"auto-csr-approver-29547892-pgkbb\" (UID: \"a4994561-8589-40fb-92a6-20c78e23331b\") " 
pod="openshift-infra/auto-csr-approver-29547892-pgkbb" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.365809 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks7nh\" (UniqueName: \"kubernetes.io/projected/a4994561-8589-40fb-92a6-20c78e23331b-kube-api-access-ks7nh\") pod \"auto-csr-approver-29547892-pgkbb\" (UID: \"a4994561-8589-40fb-92a6-20c78e23331b\") " pod="openshift-infra/auto-csr-approver-29547892-pgkbb" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.471003 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547892-pgkbb" Mar 07 08:52:01 crc kubenswrapper[4761]: I0307 08:52:01.052100 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547892-pgkbb"] Mar 07 08:52:01 crc kubenswrapper[4761]: I0307 08:52:01.722124 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547892-pgkbb" event={"ID":"a4994561-8589-40fb-92a6-20c78e23331b","Type":"ContainerStarted","Data":"1a42eb3fa57ab3854d815f3788a0485fac7ee3837ec7c2a0485d8b4eb6278b32"} Mar 07 08:52:02 crc kubenswrapper[4761]: I0307 08:52:02.743413 4761 generic.go:334] "Generic (PLEG): container finished" podID="a4994561-8589-40fb-92a6-20c78e23331b" containerID="16c463101ccc932f6798ea8b411f08bbb64c2a538904d22603e2d036ee3fbb54" exitCode=0 Mar 07 08:52:02 crc kubenswrapper[4761]: I0307 08:52:02.743667 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547892-pgkbb" event={"ID":"a4994561-8589-40fb-92a6-20c78e23331b","Type":"ContainerDied","Data":"16c463101ccc932f6798ea8b411f08bbb64c2a538904d22603e2d036ee3fbb54"} Mar 07 08:52:04 crc kubenswrapper[4761]: I0307 08:52:04.259872 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547892-pgkbb" Mar 07 08:52:04 crc kubenswrapper[4761]: I0307 08:52:04.356213 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks7nh\" (UniqueName: \"kubernetes.io/projected/a4994561-8589-40fb-92a6-20c78e23331b-kube-api-access-ks7nh\") pod \"a4994561-8589-40fb-92a6-20c78e23331b\" (UID: \"a4994561-8589-40fb-92a6-20c78e23331b\") " Mar 07 08:52:04 crc kubenswrapper[4761]: I0307 08:52:04.365801 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4994561-8589-40fb-92a6-20c78e23331b-kube-api-access-ks7nh" (OuterVolumeSpecName: "kube-api-access-ks7nh") pod "a4994561-8589-40fb-92a6-20c78e23331b" (UID: "a4994561-8589-40fb-92a6-20c78e23331b"). InnerVolumeSpecName "kube-api-access-ks7nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:52:04 crc kubenswrapper[4761]: I0307 08:52:04.459587 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks7nh\" (UniqueName: \"kubernetes.io/projected/a4994561-8589-40fb-92a6-20c78e23331b-kube-api-access-ks7nh\") on node \"crc\" DevicePath \"\"" Mar 07 08:52:04 crc kubenswrapper[4761]: I0307 08:52:04.770976 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547892-pgkbb" event={"ID":"a4994561-8589-40fb-92a6-20c78e23331b","Type":"ContainerDied","Data":"1a42eb3fa57ab3854d815f3788a0485fac7ee3837ec7c2a0485d8b4eb6278b32"} Mar 07 08:52:04 crc kubenswrapper[4761]: I0307 08:52:04.771259 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a42eb3fa57ab3854d815f3788a0485fac7ee3837ec7c2a0485d8b4eb6278b32" Mar 07 08:52:04 crc kubenswrapper[4761]: I0307 08:52:04.771062 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547892-pgkbb" Mar 07 08:52:05 crc kubenswrapper[4761]: I0307 08:52:05.331605 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547886-7h9n2"] Mar 07 08:52:05 crc kubenswrapper[4761]: I0307 08:52:05.342168 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547886-7h9n2"] Mar 07 08:52:05 crc kubenswrapper[4761]: I0307 08:52:05.719936 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea7d82cc-2886-4165-adeb-f5c619e985d3" path="/var/lib/kubelet/pods/ea7d82cc-2886-4165-adeb-f5c619e985d3/volumes" Mar 07 08:52:21 crc kubenswrapper[4761]: I0307 08:52:21.233254 4761 scope.go:117] "RemoveContainer" containerID="ac108cfd8711bdc247b736dad6e49c69e12c1588416ca0d497290394ba52eb0e" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.449817 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-srlct"] Mar 07 08:52:57 crc kubenswrapper[4761]: E0307 08:52:57.451188 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4994561-8589-40fb-92a6-20c78e23331b" containerName="oc" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.451206 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4994561-8589-40fb-92a6-20c78e23331b" containerName="oc" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.451470 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4994561-8589-40fb-92a6-20c78e23331b" containerName="oc" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.458294 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.479389 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-srlct"] Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.630518 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmcfl\" (UniqueName: \"kubernetes.io/projected/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-kube-api-access-hmcfl\") pod \"redhat-marketplace-srlct\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.630586 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-utilities\") pod \"redhat-marketplace-srlct\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.630787 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-catalog-content\") pod \"redhat-marketplace-srlct\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.732679 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-utilities\") pod \"redhat-marketplace-srlct\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.732879 4761 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-catalog-content\") pod \"redhat-marketplace-srlct\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.733019 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmcfl\" (UniqueName: \"kubernetes.io/projected/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-kube-api-access-hmcfl\") pod \"redhat-marketplace-srlct\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.733652 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-catalog-content\") pod \"redhat-marketplace-srlct\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.733834 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-utilities\") pod \"redhat-marketplace-srlct\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.757697 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmcfl\" (UniqueName: \"kubernetes.io/projected/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-kube-api-access-hmcfl\") pod \"redhat-marketplace-srlct\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.785084 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:58 crc kubenswrapper[4761]: I0307 08:52:58.315935 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-srlct"] Mar 07 08:52:58 crc kubenswrapper[4761]: I0307 08:52:58.378814 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srlct" event={"ID":"8b4cf87a-65bc-43f1-8f30-85fb42aac74f","Type":"ContainerStarted","Data":"2d49b6b5ec020cd03dd09e68c6070b10681234c9bc8810f539f470cbc40f30c5"} Mar 07 08:52:58 crc kubenswrapper[4761]: E0307 08:52:58.830821 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b4cf87a_65bc_43f1_8f30_85fb42aac74f.slice/crio-conmon-fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b4cf87a_65bc_43f1_8f30_85fb42aac74f.slice/crio-fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916.scope\": RecentStats: unable to find data in memory cache]" Mar 07 08:52:59 crc kubenswrapper[4761]: I0307 08:52:59.391101 4761 generic.go:334] "Generic (PLEG): container finished" podID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerID="fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916" exitCode=0 Mar 07 08:52:59 crc kubenswrapper[4761]: I0307 08:52:59.391169 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srlct" event={"ID":"8b4cf87a-65bc-43f1-8f30-85fb42aac74f","Type":"ContainerDied","Data":"fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916"} Mar 07 08:52:59 crc kubenswrapper[4761]: I0307 08:52:59.394023 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:53:00 crc 
kubenswrapper[4761]: I0307 08:53:00.409243 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srlct" event={"ID":"8b4cf87a-65bc-43f1-8f30-85fb42aac74f","Type":"ContainerStarted","Data":"2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff"} Mar 07 08:53:01 crc kubenswrapper[4761]: I0307 08:53:01.422841 4761 generic.go:334] "Generic (PLEG): container finished" podID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerID="2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff" exitCode=0 Mar 07 08:53:01 crc kubenswrapper[4761]: I0307 08:53:01.423419 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srlct" event={"ID":"8b4cf87a-65bc-43f1-8f30-85fb42aac74f","Type":"ContainerDied","Data":"2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff"} Mar 07 08:53:02 crc kubenswrapper[4761]: I0307 08:53:02.434846 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srlct" event={"ID":"8b4cf87a-65bc-43f1-8f30-85fb42aac74f","Type":"ContainerStarted","Data":"ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8"} Mar 07 08:53:02 crc kubenswrapper[4761]: I0307 08:53:02.470447 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-srlct" podStartSLOduration=3.026588404 podStartE2EDuration="5.470425617s" podCreationTimestamp="2026-03-07 08:52:57 +0000 UTC" firstStartedPulling="2026-03-07 08:52:59.393681757 +0000 UTC m=+3836.302848242" lastFinishedPulling="2026-03-07 08:53:01.83751895 +0000 UTC m=+3838.746685455" observedRunningTime="2026-03-07 08:53:02.458238354 +0000 UTC m=+3839.367404839" watchObservedRunningTime="2026-03-07 08:53:02.470425617 +0000 UTC m=+3839.379592102" Mar 07 08:53:07 crc kubenswrapper[4761]: I0307 08:53:07.786056 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:53:07 crc kubenswrapper[4761]: I0307 08:53:07.786676 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:53:07 crc kubenswrapper[4761]: I0307 08:53:07.862533 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:53:08 crc kubenswrapper[4761]: I0307 08:53:08.544819 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:53:08 crc kubenswrapper[4761]: I0307 08:53:08.598779 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-srlct"] Mar 07 08:53:10 crc kubenswrapper[4761]: I0307 08:53:10.527053 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-srlct" podUID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerName="registry-server" containerID="cri-o://ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8" gracePeriod=2 Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.284674 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.300934 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-catalog-content\") pod \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.301296 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-utilities\") pod \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.301446 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmcfl\" (UniqueName: \"kubernetes.io/projected/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-kube-api-access-hmcfl\") pod \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.302550 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-utilities" (OuterVolumeSpecName: "utilities") pod "8b4cf87a-65bc-43f1-8f30-85fb42aac74f" (UID: "8b4cf87a-65bc-43f1-8f30-85fb42aac74f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.314044 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-kube-api-access-hmcfl" (OuterVolumeSpecName: "kube-api-access-hmcfl") pod "8b4cf87a-65bc-43f1-8f30-85fb42aac74f" (UID: "8b4cf87a-65bc-43f1-8f30-85fb42aac74f"). InnerVolumeSpecName "kube-api-access-hmcfl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.349355 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b4cf87a-65bc-43f1-8f30-85fb42aac74f" (UID: "8b4cf87a-65bc-43f1-8f30-85fb42aac74f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.403256 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmcfl\" (UniqueName: \"kubernetes.io/projected/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-kube-api-access-hmcfl\") on node \"crc\" DevicePath \"\"" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.403298 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.403312 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.539476 4761 generic.go:334] "Generic (PLEG): container finished" podID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerID="ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8" exitCode=0 Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.539520 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srlct" event={"ID":"8b4cf87a-65bc-43f1-8f30-85fb42aac74f","Type":"ContainerDied","Data":"ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8"} Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.539549 4761 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-srlct" event={"ID":"8b4cf87a-65bc-43f1-8f30-85fb42aac74f","Type":"ContainerDied","Data":"2d49b6b5ec020cd03dd09e68c6070b10681234c9bc8810f539f470cbc40f30c5"} Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.539569 4761 scope.go:117] "RemoveContainer" containerID="ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.539791 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.586963 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-srlct"] Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.587523 4761 scope.go:117] "RemoveContainer" containerID="2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.603416 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-srlct"] Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.613313 4761 scope.go:117] "RemoveContainer" containerID="fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.671832 4761 scope.go:117] "RemoveContainer" containerID="ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8" Mar 07 08:53:11 crc kubenswrapper[4761]: E0307 08:53:11.673029 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8\": container with ID starting with ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8 not found: ID does not exist" containerID="ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.673086 4761 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8"} err="failed to get container status \"ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8\": rpc error: code = NotFound desc = could not find container \"ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8\": container with ID starting with ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8 not found: ID does not exist" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.673119 4761 scope.go:117] "RemoveContainer" containerID="2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff" Mar 07 08:53:11 crc kubenswrapper[4761]: E0307 08:53:11.673647 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff\": container with ID starting with 2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff not found: ID does not exist" containerID="2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.673670 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff"} err="failed to get container status \"2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff\": rpc error: code = NotFound desc = could not find container \"2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff\": container with ID starting with 2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff not found: ID does not exist" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.673683 4761 scope.go:117] "RemoveContainer" containerID="fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916" Mar 07 08:53:11 crc kubenswrapper[4761]: E0307 
08:53:11.674248 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916\": container with ID starting with fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916 not found: ID does not exist" containerID="fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.674300 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916"} err="failed to get container status \"fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916\": rpc error: code = NotFound desc = could not find container \"fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916\": container with ID starting with fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916 not found: ID does not exist" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.717160 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" path="/var/lib/kubelet/pods/8b4cf87a-65bc-43f1-8f30-85fb42aac74f/volumes" Mar 07 08:53:43 crc kubenswrapper[4761]: I0307 08:53:43.769016 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:53:43 crc kubenswrapper[4761]: I0307 08:53:43.770575 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.155917 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547894-qfvdj"] Mar 07 08:54:00 crc kubenswrapper[4761]: E0307 08:54:00.157090 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerName="extract-content" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.157106 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerName="extract-content" Mar 07 08:54:00 crc kubenswrapper[4761]: E0307 08:54:00.157118 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerName="extract-utilities" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.157126 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerName="extract-utilities" Mar 07 08:54:00 crc kubenswrapper[4761]: E0307 08:54:00.157160 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerName="registry-server" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.157168 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerName="registry-server" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.157475 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerName="registry-server" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.158574 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547894-qfvdj" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.164032 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.164307 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.165828 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.173321 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547894-qfvdj"] Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.287954 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n56rs\" (UniqueName: \"kubernetes.io/projected/a11aad48-1955-49bc-8682-f74ea9d9b3c7-kube-api-access-n56rs\") pod \"auto-csr-approver-29547894-qfvdj\" (UID: \"a11aad48-1955-49bc-8682-f74ea9d9b3c7\") " pod="openshift-infra/auto-csr-approver-29547894-qfvdj" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.390113 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n56rs\" (UniqueName: \"kubernetes.io/projected/a11aad48-1955-49bc-8682-f74ea9d9b3c7-kube-api-access-n56rs\") pod \"auto-csr-approver-29547894-qfvdj\" (UID: \"a11aad48-1955-49bc-8682-f74ea9d9b3c7\") " pod="openshift-infra/auto-csr-approver-29547894-qfvdj" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.411544 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n56rs\" (UniqueName: \"kubernetes.io/projected/a11aad48-1955-49bc-8682-f74ea9d9b3c7-kube-api-access-n56rs\") pod \"auto-csr-approver-29547894-qfvdj\" (UID: \"a11aad48-1955-49bc-8682-f74ea9d9b3c7\") " 
pod="openshift-infra/auto-csr-approver-29547894-qfvdj" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.489542 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547894-qfvdj" Mar 07 08:54:01 crc kubenswrapper[4761]: I0307 08:54:01.009912 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547894-qfvdj"] Mar 07 08:54:01 crc kubenswrapper[4761]: I0307 08:54:01.099099 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547894-qfvdj" event={"ID":"a11aad48-1955-49bc-8682-f74ea9d9b3c7","Type":"ContainerStarted","Data":"cff9825863d8e941837ab6aa807e192ae55440da70397b5f2cb62a157a8f4544"} Mar 07 08:54:03 crc kubenswrapper[4761]: I0307 08:54:03.137853 4761 generic.go:334] "Generic (PLEG): container finished" podID="a11aad48-1955-49bc-8682-f74ea9d9b3c7" containerID="badfe9d91efd9e075476e7823eeaba9a665c8e9932f391bf56cc70889344449f" exitCode=0 Mar 07 08:54:03 crc kubenswrapper[4761]: I0307 08:54:03.138377 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547894-qfvdj" event={"ID":"a11aad48-1955-49bc-8682-f74ea9d9b3c7","Type":"ContainerDied","Data":"badfe9d91efd9e075476e7823eeaba9a665c8e9932f391bf56cc70889344449f"} Mar 07 08:54:04 crc kubenswrapper[4761]: I0307 08:54:04.615231 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547894-qfvdj" Mar 07 08:54:04 crc kubenswrapper[4761]: I0307 08:54:04.712120 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n56rs\" (UniqueName: \"kubernetes.io/projected/a11aad48-1955-49bc-8682-f74ea9d9b3c7-kube-api-access-n56rs\") pod \"a11aad48-1955-49bc-8682-f74ea9d9b3c7\" (UID: \"a11aad48-1955-49bc-8682-f74ea9d9b3c7\") " Mar 07 08:54:04 crc kubenswrapper[4761]: I0307 08:54:04.725525 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a11aad48-1955-49bc-8682-f74ea9d9b3c7-kube-api-access-n56rs" (OuterVolumeSpecName: "kube-api-access-n56rs") pod "a11aad48-1955-49bc-8682-f74ea9d9b3c7" (UID: "a11aad48-1955-49bc-8682-f74ea9d9b3c7"). InnerVolumeSpecName "kube-api-access-n56rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:54:04 crc kubenswrapper[4761]: I0307 08:54:04.818008 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n56rs\" (UniqueName: \"kubernetes.io/projected/a11aad48-1955-49bc-8682-f74ea9d9b3c7-kube-api-access-n56rs\") on node \"crc\" DevicePath \"\"" Mar 07 08:54:05 crc kubenswrapper[4761]: I0307 08:54:05.175868 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547894-qfvdj" event={"ID":"a11aad48-1955-49bc-8682-f74ea9d9b3c7","Type":"ContainerDied","Data":"cff9825863d8e941837ab6aa807e192ae55440da70397b5f2cb62a157a8f4544"} Mar 07 08:54:05 crc kubenswrapper[4761]: I0307 08:54:05.175914 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cff9825863d8e941837ab6aa807e192ae55440da70397b5f2cb62a157a8f4544" Mar 07 08:54:05 crc kubenswrapper[4761]: I0307 08:54:05.175971 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547894-qfvdj" Mar 07 08:54:05 crc kubenswrapper[4761]: I0307 08:54:05.721473 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547888-8x864"] Mar 07 08:54:05 crc kubenswrapper[4761]: I0307 08:54:05.728445 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547888-8x864"] Mar 07 08:54:07 crc kubenswrapper[4761]: I0307 08:54:07.724913 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f" path="/var/lib/kubelet/pods/d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f/volumes" Mar 07 08:54:13 crc kubenswrapper[4761]: I0307 08:54:13.770259 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:54:13 crc kubenswrapper[4761]: I0307 08:54:13.772308 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:54:21 crc kubenswrapper[4761]: I0307 08:54:21.402147 4761 scope.go:117] "RemoveContainer" containerID="cda9fe944526e544b8a82b62eb8056d564d5c6bed8acb7abdac1932157cdb6c6" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.179911 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mxxtv"] Mar 07 08:54:42 crc kubenswrapper[4761]: E0307 08:54:42.181192 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11aad48-1955-49bc-8682-f74ea9d9b3c7" containerName="oc" Mar 07 08:54:42 crc 
kubenswrapper[4761]: I0307 08:54:42.181211 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11aad48-1955-49bc-8682-f74ea9d9b3c7" containerName="oc" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.181562 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a11aad48-1955-49bc-8682-f74ea9d9b3c7" containerName="oc" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.183676 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.198250 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mxxtv"] Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.224699 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-catalog-content\") pod \"certified-operators-mxxtv\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.225184 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-utilities\") pod \"certified-operators-mxxtv\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.225247 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd7n5\" (UniqueName: \"kubernetes.io/projected/46228c21-1dcc-4f70-9278-566b64c0b057-kube-api-access-hd7n5\") pod \"certified-operators-mxxtv\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:54:42 crc 
kubenswrapper[4761]: I0307 08:54:42.328027 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-utilities\") pod \"certified-operators-mxxtv\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " pod="openshift-marketplace/certified-operators-mxxtv"
Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.328082 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd7n5\" (UniqueName: \"kubernetes.io/projected/46228c21-1dcc-4f70-9278-566b64c0b057-kube-api-access-hd7n5\") pod \"certified-operators-mxxtv\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " pod="openshift-marketplace/certified-operators-mxxtv"
Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.328251 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-catalog-content\") pod \"certified-operators-mxxtv\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " pod="openshift-marketplace/certified-operators-mxxtv"
Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.328602 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-utilities\") pod \"certified-operators-mxxtv\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " pod="openshift-marketplace/certified-operators-mxxtv"
Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.328637 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-catalog-content\") pod \"certified-operators-mxxtv\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " pod="openshift-marketplace/certified-operators-mxxtv"
Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.353262 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd7n5\" (UniqueName: \"kubernetes.io/projected/46228c21-1dcc-4f70-9278-566b64c0b057-kube-api-access-hd7n5\") pod \"certified-operators-mxxtv\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " pod="openshift-marketplace/certified-operators-mxxtv"
Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.508848 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mxxtv"
Mar 07 08:54:43 crc kubenswrapper[4761]: I0307 08:54:43.046189 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mxxtv"]
Mar 07 08:54:43 crc kubenswrapper[4761]: I0307 08:54:43.655864 4761 generic.go:334] "Generic (PLEG): container finished" podID="46228c21-1dcc-4f70-9278-566b64c0b057" containerID="8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c" exitCode=0
Mar 07 08:54:43 crc kubenswrapper[4761]: I0307 08:54:43.656754 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxxtv" event={"ID":"46228c21-1dcc-4f70-9278-566b64c0b057","Type":"ContainerDied","Data":"8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c"}
Mar 07 08:54:43 crc kubenswrapper[4761]: I0307 08:54:43.657824 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxxtv" event={"ID":"46228c21-1dcc-4f70-9278-566b64c0b057","Type":"ContainerStarted","Data":"7c28e43324e51de33cf3e60375e69068ae74b8153d1eec921281646e1d95992e"}
Mar 07 08:54:43 crc kubenswrapper[4761]: I0307 08:54:43.768834 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 08:54:43 crc kubenswrapper[4761]: I0307 08:54:43.769099 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 08:54:43 crc kubenswrapper[4761]: I0307 08:54:43.774543 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9"
Mar 07 08:54:43 crc kubenswrapper[4761]: I0307 08:54:43.775413 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 07 08:54:43 crc kubenswrapper[4761]: I0307 08:54:43.775482 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" gracePeriod=600
Mar 07 08:54:43 crc kubenswrapper[4761]: E0307 08:54:43.885103 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f2ca598_c5ae_4f45_bb7a_812b75562203.slice/crio-06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803.scope\": RecentStats: unable to find data in memory cache]"
Mar 07 08:54:43 crc kubenswrapper[4761]: E0307 08:54:43.896104 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:54:44 crc kubenswrapper[4761]: I0307 08:54:44.679750 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803"}
Mar 07 08:54:44 crc kubenswrapper[4761]: I0307 08:54:44.679674 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" exitCode=0
Mar 07 08:54:44 crc kubenswrapper[4761]: I0307 08:54:44.680246 4761 scope.go:117] "RemoveContainer" containerID="ef65b2e950cadd2d0ba24302769cb6d9be0b6d35f82ee72399a22260360a7b43"
Mar 07 08:54:44 crc kubenswrapper[4761]: I0307 08:54:44.681657 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803"
Mar 07 08:54:44 crc kubenswrapper[4761]: E0307 08:54:44.682091 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:54:45 crc kubenswrapper[4761]: I0307 08:54:45.697981 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxxtv" event={"ID":"46228c21-1dcc-4f70-9278-566b64c0b057","Type":"ContainerStarted","Data":"9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b"}
Mar 07 08:54:47 crc kubenswrapper[4761]: I0307 08:54:47.730296 4761 generic.go:334] "Generic (PLEG): container finished" podID="46228c21-1dcc-4f70-9278-566b64c0b057" containerID="9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b" exitCode=0
Mar 07 08:54:47 crc kubenswrapper[4761]: I0307 08:54:47.730858 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxxtv" event={"ID":"46228c21-1dcc-4f70-9278-566b64c0b057","Type":"ContainerDied","Data":"9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b"}
Mar 07 08:54:48 crc kubenswrapper[4761]: I0307 08:54:48.742303 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxxtv" event={"ID":"46228c21-1dcc-4f70-9278-566b64c0b057","Type":"ContainerStarted","Data":"db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994"}
Mar 07 08:54:48 crc kubenswrapper[4761]: I0307 08:54:48.764410 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mxxtv" podStartSLOduration=1.940555246 podStartE2EDuration="6.764389445s" podCreationTimestamp="2026-03-07 08:54:42 +0000 UTC" firstStartedPulling="2026-03-07 08:54:43.659007594 +0000 UTC m=+3940.568174079" lastFinishedPulling="2026-03-07 08:54:48.482841803 +0000 UTC m=+3945.392008278" observedRunningTime="2026-03-07 08:54:48.764355425 +0000 UTC m=+3945.673521920" watchObservedRunningTime="2026-03-07 08:54:48.764389445 +0000 UTC m=+3945.673555920"
Mar 07 08:54:52 crc kubenswrapper[4761]: I0307 08:54:52.509308 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mxxtv"
Mar 07 08:54:52 crc kubenswrapper[4761]: I0307 08:54:52.509784 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mxxtv"
Mar 07 08:54:53 crc kubenswrapper[4761]: I0307 08:54:53.562148 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mxxtv" podUID="46228c21-1dcc-4f70-9278-566b64c0b057" containerName="registry-server" probeResult="failure" output=<
Mar 07 08:54:53 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 08:54:53 crc kubenswrapper[4761]: >
Mar 07 08:54:57 crc kubenswrapper[4761]: I0307 08:54:57.707070 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803"
Mar 07 08:54:57 crc kubenswrapper[4761]: E0307 08:54:57.708971 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:54:59 crc kubenswrapper[4761]: I0307 08:54:59.968410 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7wqvp"]
Mar 07 08:54:59 crc kubenswrapper[4761]: I0307 08:54:59.972633 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wqvp"
Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:54:59.999884 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wqvp"]
Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.091795 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-utilities\") pod \"community-operators-7wqvp\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " pod="openshift-marketplace/community-operators-7wqvp"
Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.091905 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk7fn\" (UniqueName: \"kubernetes.io/projected/c52429fa-d918-4b0d-b436-4643abfc9556-kube-api-access-fk7fn\") pod \"community-operators-7wqvp\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " pod="openshift-marketplace/community-operators-7wqvp"
Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.092137 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-catalog-content\") pod \"community-operators-7wqvp\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " pod="openshift-marketplace/community-operators-7wqvp"
Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.194218 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-utilities\") pod \"community-operators-7wqvp\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " pod="openshift-marketplace/community-operators-7wqvp"
Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.194288 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk7fn\" (UniqueName: \"kubernetes.io/projected/c52429fa-d918-4b0d-b436-4643abfc9556-kube-api-access-fk7fn\") pod \"community-operators-7wqvp\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " pod="openshift-marketplace/community-operators-7wqvp"
Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.194393 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-catalog-content\") pod \"community-operators-7wqvp\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " pod="openshift-marketplace/community-operators-7wqvp"
Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.195212 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-catalog-content\") pod \"community-operators-7wqvp\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " pod="openshift-marketplace/community-operators-7wqvp"
Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.195366 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-utilities\") pod \"community-operators-7wqvp\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " pod="openshift-marketplace/community-operators-7wqvp"
Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.228505 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk7fn\" (UniqueName: \"kubernetes.io/projected/c52429fa-d918-4b0d-b436-4643abfc9556-kube-api-access-fk7fn\") pod \"community-operators-7wqvp\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " pod="openshift-marketplace/community-operators-7wqvp"
Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.305398 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wqvp"
Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.854676 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wqvp"]
Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.878397 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wqvp" event={"ID":"c52429fa-d918-4b0d-b436-4643abfc9556","Type":"ContainerStarted","Data":"2e1ede8e44380b4c4fa0f758a60eed459b6622fe2982ef8ea2e3300f5092f072"}
Mar 07 08:55:01 crc kubenswrapper[4761]: I0307 08:55:01.891567 4761 generic.go:334] "Generic (PLEG): container finished" podID="c52429fa-d918-4b0d-b436-4643abfc9556" containerID="7934f6cf3b280d4efaa4fff0a6702b8318dd7cb315dc8a816c1d267ff4b00f8e" exitCode=0
Mar 07 08:55:01 crc kubenswrapper[4761]: I0307 08:55:01.891660 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wqvp" event={"ID":"c52429fa-d918-4b0d-b436-4643abfc9556","Type":"ContainerDied","Data":"7934f6cf3b280d4efaa4fff0a6702b8318dd7cb315dc8a816c1d267ff4b00f8e"}
Mar 07 08:55:02 crc kubenswrapper[4761]: I0307 08:55:02.585668 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mxxtv"
Mar 07 08:55:02 crc kubenswrapper[4761]: I0307 08:55:02.669358 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mxxtv"
Mar 07 08:55:02 crc kubenswrapper[4761]: I0307 08:55:02.918381 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wqvp" event={"ID":"c52429fa-d918-4b0d-b436-4643abfc9556","Type":"ContainerStarted","Data":"5e98e0b48967f3012a7d7b247db9f18083477211cd0d3736f466f99dd63dab6a"}
Mar 07 08:55:04 crc kubenswrapper[4761]: I0307 08:55:04.931060 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mxxtv"]
Mar 07 08:55:04 crc kubenswrapper[4761]: I0307 08:55:04.932006 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mxxtv" podUID="46228c21-1dcc-4f70-9278-566b64c0b057" containerName="registry-server" containerID="cri-o://db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994" gracePeriod=2
Mar 07 08:55:04 crc kubenswrapper[4761]: I0307 08:55:04.947709 4761 generic.go:334] "Generic (PLEG): container finished" podID="c52429fa-d918-4b0d-b436-4643abfc9556" containerID="5e98e0b48967f3012a7d7b247db9f18083477211cd0d3736f466f99dd63dab6a" exitCode=0
Mar 07 08:55:04 crc kubenswrapper[4761]: I0307 08:55:04.947790 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wqvp" event={"ID":"c52429fa-d918-4b0d-b436-4643abfc9556","Type":"ContainerDied","Data":"5e98e0b48967f3012a7d7b247db9f18083477211cd0d3736f466f99dd63dab6a"}
Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.551637 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mxxtv"
Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.637384 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-utilities\") pod \"46228c21-1dcc-4f70-9278-566b64c0b057\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") "
Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.637646 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-catalog-content\") pod \"46228c21-1dcc-4f70-9278-566b64c0b057\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") "
Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.637704 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd7n5\" (UniqueName: \"kubernetes.io/projected/46228c21-1dcc-4f70-9278-566b64c0b057-kube-api-access-hd7n5\") pod \"46228c21-1dcc-4f70-9278-566b64c0b057\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") "
Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.639537 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-utilities" (OuterVolumeSpecName: "utilities") pod "46228c21-1dcc-4f70-9278-566b64c0b057" (UID: "46228c21-1dcc-4f70-9278-566b64c0b057"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.655680 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46228c21-1dcc-4f70-9278-566b64c0b057-kube-api-access-hd7n5" (OuterVolumeSpecName: "kube-api-access-hd7n5") pod "46228c21-1dcc-4f70-9278-566b64c0b057" (UID: "46228c21-1dcc-4f70-9278-566b64c0b057"). InnerVolumeSpecName "kube-api-access-hd7n5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.722828 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46228c21-1dcc-4f70-9278-566b64c0b057" (UID: "46228c21-1dcc-4f70-9278-566b64c0b057"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.741684 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.741755 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.741781 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd7n5\" (UniqueName: \"kubernetes.io/projected/46228c21-1dcc-4f70-9278-566b64c0b057-kube-api-access-hd7n5\") on node \"crc\" DevicePath \"\""
Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.962793 4761 generic.go:334] "Generic (PLEG): container finished" podID="46228c21-1dcc-4f70-9278-566b64c0b057" containerID="db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994" exitCode=0
Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.962858 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxxtv" event={"ID":"46228c21-1dcc-4f70-9278-566b64c0b057","Type":"ContainerDied","Data":"db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994"}
Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.962883 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxxtv" event={"ID":"46228c21-1dcc-4f70-9278-566b64c0b057","Type":"ContainerDied","Data":"7c28e43324e51de33cf3e60375e69068ae74b8153d1eec921281646e1d95992e"}
Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.962900 4761 scope.go:117] "RemoveContainer" containerID="db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994"
Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.963899 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mxxtv"
Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.968740 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wqvp" event={"ID":"c52429fa-d918-4b0d-b436-4643abfc9556","Type":"ContainerStarted","Data":"8a8e3d2e6b120dc217d6a6a19cf985af10d62ef876fb679a3bb850497f86db2a"}
Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.993320 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7wqvp" podStartSLOduration=3.527366896 podStartE2EDuration="6.993299911s" podCreationTimestamp="2026-03-07 08:54:59 +0000 UTC" firstStartedPulling="2026-03-07 08:55:01.894361595 +0000 UTC m=+3958.803528080" lastFinishedPulling="2026-03-07 08:55:05.36029462 +0000 UTC m=+3962.269461095" observedRunningTime="2026-03-07 08:55:05.99003694 +0000 UTC m=+3962.899203405" watchObservedRunningTime="2026-03-07 08:55:05.993299911 +0000 UTC m=+3962.902466386"
Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.996503 4761 scope.go:117] "RemoveContainer" containerID="9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b"
Mar 07 08:55:06 crc kubenswrapper[4761]: I0307 08:55:06.026051 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mxxtv"]
Mar 07 08:55:06 crc kubenswrapper[4761]: I0307 08:55:06.029226 4761 scope.go:117] "RemoveContainer" containerID="8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c"
Mar 07 08:55:06 crc kubenswrapper[4761]: I0307 08:55:06.049599 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mxxtv"]
Mar 07 08:55:06 crc kubenswrapper[4761]: I0307 08:55:06.090230 4761 scope.go:117] "RemoveContainer" containerID="db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994"
Mar 07 08:55:06 crc kubenswrapper[4761]: E0307 08:55:06.090984 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994\": container with ID starting with db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994 not found: ID does not exist" containerID="db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994"
Mar 07 08:55:06 crc kubenswrapper[4761]: I0307 08:55:06.091024 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994"} err="failed to get container status \"db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994\": rpc error: code = NotFound desc = could not find container \"db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994\": container with ID starting with db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994 not found: ID does not exist"
Mar 07 08:55:06 crc kubenswrapper[4761]: I0307 08:55:06.091049 4761 scope.go:117] "RemoveContainer" containerID="9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b"
Mar 07 08:55:06 crc kubenswrapper[4761]: E0307 08:55:06.091901 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b\": container with ID starting with 9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b not found: ID does not exist" containerID="9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b"
Mar 07 08:55:06 crc kubenswrapper[4761]: I0307 08:55:06.091950 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b"} err="failed to get container status \"9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b\": rpc error: code = NotFound desc = could not find container \"9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b\": container with ID starting with 9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b not found: ID does not exist"
Mar 07 08:55:06 crc kubenswrapper[4761]: I0307 08:55:06.091982 4761 scope.go:117] "RemoveContainer" containerID="8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c"
Mar 07 08:55:06 crc kubenswrapper[4761]: E0307 08:55:06.092326 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c\": container with ID starting with 8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c not found: ID does not exist" containerID="8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c"
Mar 07 08:55:06 crc kubenswrapper[4761]: I0307 08:55:06.092353 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c"} err="failed to get container status \"8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c\": rpc error: code = NotFound desc = could not find container \"8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c\": container with ID starting with 8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c not found: ID does not exist"
Mar 07 08:55:07 crc kubenswrapper[4761]: I0307 08:55:07.726048 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46228c21-1dcc-4f70-9278-566b64c0b057" path="/var/lib/kubelet/pods/46228c21-1dcc-4f70-9278-566b64c0b057/volumes"
Mar 07 08:55:10 crc kubenswrapper[4761]: I0307 08:55:10.306557 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7wqvp"
Mar 07 08:55:10 crc kubenswrapper[4761]: I0307 08:55:10.307085 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7wqvp"
Mar 07 08:55:10 crc kubenswrapper[4761]: I0307 08:55:10.371861 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7wqvp"
Mar 07 08:55:11 crc kubenswrapper[4761]: I0307 08:55:11.142236 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7wqvp"
Mar 07 08:55:11 crc kubenswrapper[4761]: I0307 08:55:11.526177 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wqvp"]
Mar 07 08:55:11 crc kubenswrapper[4761]: I0307 08:55:11.706820 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803"
Mar 07 08:55:11 crc kubenswrapper[4761]: E0307 08:55:11.707321 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:55:13 crc kubenswrapper[4761]: I0307 08:55:13.091074 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7wqvp" podUID="c52429fa-d918-4b0d-b436-4643abfc9556" containerName="registry-server" containerID="cri-o://8a8e3d2e6b120dc217d6a6a19cf985af10d62ef876fb679a3bb850497f86db2a" gracePeriod=2
Mar 07 08:55:14 crc kubenswrapper[4761]: I0307 08:55:14.107674 4761 generic.go:334] "Generic (PLEG): container finished" podID="c52429fa-d918-4b0d-b436-4643abfc9556" containerID="8a8e3d2e6b120dc217d6a6a19cf985af10d62ef876fb679a3bb850497f86db2a" exitCode=0
Mar 07 08:55:14 crc kubenswrapper[4761]: I0307 08:55:14.107776 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wqvp" event={"ID":"c52429fa-d918-4b0d-b436-4643abfc9556","Type":"ContainerDied","Data":"8a8e3d2e6b120dc217d6a6a19cf985af10d62ef876fb679a3bb850497f86db2a"}
Mar 07 08:55:14 crc kubenswrapper[4761]: I0307 08:55:14.886299 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wqvp"
Mar 07 08:55:14 crc kubenswrapper[4761]: I0307 08:55:14.980459 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-utilities\") pod \"c52429fa-d918-4b0d-b436-4643abfc9556\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") "
Mar 07 08:55:14 crc kubenswrapper[4761]: I0307 08:55:14.980885 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk7fn\" (UniqueName: \"kubernetes.io/projected/c52429fa-d918-4b0d-b436-4643abfc9556-kube-api-access-fk7fn\") pod \"c52429fa-d918-4b0d-b436-4643abfc9556\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") "
Mar 07 08:55:14 crc kubenswrapper[4761]: I0307 08:55:14.980937 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-catalog-content\") pod \"c52429fa-d918-4b0d-b436-4643abfc9556\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") "
Mar 07 08:55:14 crc kubenswrapper[4761]: I0307 08:55:14.983092 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-utilities" (OuterVolumeSpecName: "utilities") pod "c52429fa-d918-4b0d-b436-4643abfc9556" (UID: "c52429fa-d918-4b0d-b436-4643abfc9556"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:55:14 crc kubenswrapper[4761]: I0307 08:55:14.989007 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c52429fa-d918-4b0d-b436-4643abfc9556-kube-api-access-fk7fn" (OuterVolumeSpecName: "kube-api-access-fk7fn") pod "c52429fa-d918-4b0d-b436-4643abfc9556" (UID: "c52429fa-d918-4b0d-b436-4643abfc9556"). InnerVolumeSpecName "kube-api-access-fk7fn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.052476 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c52429fa-d918-4b0d-b436-4643abfc9556" (UID: "c52429fa-d918-4b0d-b436-4643abfc9556"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.083526 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk7fn\" (UniqueName: \"kubernetes.io/projected/c52429fa-d918-4b0d-b436-4643abfc9556-kube-api-access-fk7fn\") on node \"crc\" DevicePath \"\""
Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.083565 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.083575 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.120861 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wqvp" event={"ID":"c52429fa-d918-4b0d-b436-4643abfc9556","Type":"ContainerDied","Data":"2e1ede8e44380b4c4fa0f758a60eed459b6622fe2982ef8ea2e3300f5092f072"}
Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.120934 4761 scope.go:117] "RemoveContainer" containerID="8a8e3d2e6b120dc217d6a6a19cf985af10d62ef876fb679a3bb850497f86db2a"
Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.122181 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wqvp"
Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.147971 4761 scope.go:117] "RemoveContainer" containerID="5e98e0b48967f3012a7d7b247db9f18083477211cd0d3736f466f99dd63dab6a"
Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.198476 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wqvp"]
Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.211321 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7wqvp"]
Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.230701 4761 scope.go:117] "RemoveContainer" containerID="7934f6cf3b280d4efaa4fff0a6702b8318dd7cb315dc8a816c1d267ff4b00f8e"
Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.722134 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c52429fa-d918-4b0d-b436-4643abfc9556" path="/var/lib/kubelet/pods/c52429fa-d918-4b0d-b436-4643abfc9556/volumes"
Mar 07 08:55:25 crc kubenswrapper[4761]: I0307 08:55:25.706515 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803"
Mar 07 08:55:25 crc kubenswrapper[4761]: E0307 08:55:25.709885 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:55:38 crc kubenswrapper[4761]: I0307 08:55:38.707280 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803"
Mar 07 08:55:38 crc kubenswrapper[4761]: E0307 08:55:38.708353 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:55:50 crc kubenswrapper[4761]: I0307 08:55:50.706643 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803"
Mar 07 08:55:50 crc kubenswrapper[4761]: E0307 08:55:50.707852 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.163412 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547896-k5cdv"]
Mar 07 08:56:00 crc kubenswrapper[4761]: E0307 08:56:00.164776 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46228c21-1dcc-4f70-9278-566b64c0b057" containerName="extract-content"
Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.164800 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="46228c21-1dcc-4f70-9278-566b64c0b057" containerName="extract-content"
Mar 07 08:56:00 crc kubenswrapper[4761]: E0307 08:56:00.164827 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46228c21-1dcc-4f70-9278-566b64c0b057" containerName="registry-server"
Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.164836 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="46228c21-1dcc-4f70-9278-566b64c0b057"
containerName="registry-server" Mar 07 08:56:00 crc kubenswrapper[4761]: E0307 08:56:00.164853 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c52429fa-d918-4b0d-b436-4643abfc9556" containerName="extract-utilities" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.164862 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c52429fa-d918-4b0d-b436-4643abfc9556" containerName="extract-utilities" Mar 07 08:56:00 crc kubenswrapper[4761]: E0307 08:56:00.164895 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46228c21-1dcc-4f70-9278-566b64c0b057" containerName="extract-utilities" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.164904 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="46228c21-1dcc-4f70-9278-566b64c0b057" containerName="extract-utilities" Mar 07 08:56:00 crc kubenswrapper[4761]: E0307 08:56:00.164939 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c52429fa-d918-4b0d-b436-4643abfc9556" containerName="extract-content" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.164947 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c52429fa-d918-4b0d-b436-4643abfc9556" containerName="extract-content" Mar 07 08:56:00 crc kubenswrapper[4761]: E0307 08:56:00.164972 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c52429fa-d918-4b0d-b436-4643abfc9556" containerName="registry-server" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.164980 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c52429fa-d918-4b0d-b436-4643abfc9556" containerName="registry-server" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.165271 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="46228c21-1dcc-4f70-9278-566b64c0b057" containerName="registry-server" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.165303 4761 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c52429fa-d918-4b0d-b436-4643abfc9556" containerName="registry-server" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.167545 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547896-k5cdv" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.175705 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547896-k5cdv"] Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.178316 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.178560 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.178828 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.341571 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29qpj\" (UniqueName: \"kubernetes.io/projected/9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b-kube-api-access-29qpj\") pod \"auto-csr-approver-29547896-k5cdv\" (UID: \"9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b\") " pod="openshift-infra/auto-csr-approver-29547896-k5cdv" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.445593 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29qpj\" (UniqueName: \"kubernetes.io/projected/9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b-kube-api-access-29qpj\") pod \"auto-csr-approver-29547896-k5cdv\" (UID: \"9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b\") " pod="openshift-infra/auto-csr-approver-29547896-k5cdv" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.464153 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29qpj\" 
(UniqueName: \"kubernetes.io/projected/9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b-kube-api-access-29qpj\") pod \"auto-csr-approver-29547896-k5cdv\" (UID: \"9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b\") " pod="openshift-infra/auto-csr-approver-29547896-k5cdv" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.496760 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547896-k5cdv" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.967176 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547896-k5cdv"] Mar 07 08:56:01 crc kubenswrapper[4761]: I0307 08:56:01.730395 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547896-k5cdv" event={"ID":"9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b","Type":"ContainerStarted","Data":"7176b8077d7a54571d69e49fb6102cd980dfe6633cd8e62626e6ab7ee865df5a"} Mar 07 08:56:02 crc kubenswrapper[4761]: I0307 08:56:02.729496 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547896-k5cdv" event={"ID":"9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b","Type":"ContainerStarted","Data":"a3416b57302e9385064dac8130cfb79a1b591b5d67e31cc81d62ed5c3454a4fe"} Mar 07 08:56:02 crc kubenswrapper[4761]: I0307 08:56:02.752380 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547896-k5cdv" podStartSLOduration=1.804678674 podStartE2EDuration="2.752364149s" podCreationTimestamp="2026-03-07 08:56:00 +0000 UTC" firstStartedPulling="2026-03-07 08:56:00.972903956 +0000 UTC m=+4017.882070431" lastFinishedPulling="2026-03-07 08:56:01.920589431 +0000 UTC m=+4018.829755906" observedRunningTime="2026-03-07 08:56:02.744761471 +0000 UTC m=+4019.653927946" watchObservedRunningTime="2026-03-07 08:56:02.752364149 +0000 UTC m=+4019.661530624" Mar 07 08:56:03 crc kubenswrapper[4761]: I0307 08:56:03.751297 4761 generic.go:334] "Generic 
(PLEG): container finished" podID="9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b" containerID="a3416b57302e9385064dac8130cfb79a1b591b5d67e31cc81d62ed5c3454a4fe" exitCode=0 Mar 07 08:56:03 crc kubenswrapper[4761]: I0307 08:56:03.751706 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547896-k5cdv" event={"ID":"9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b","Type":"ContainerDied","Data":"a3416b57302e9385064dac8130cfb79a1b591b5d67e31cc81d62ed5c3454a4fe"} Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.328005 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547896-k5cdv" Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.468333 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29qpj\" (UniqueName: \"kubernetes.io/projected/9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b-kube-api-access-29qpj\") pod \"9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b\" (UID: \"9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b\") " Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.475824 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b-kube-api-access-29qpj" (OuterVolumeSpecName: "kube-api-access-29qpj") pod "9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b" (UID: "9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b"). InnerVolumeSpecName "kube-api-access-29qpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.571243 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29qpj\" (UniqueName: \"kubernetes.io/projected/9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b-kube-api-access-29qpj\") on node \"crc\" DevicePath \"\"" Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.709874 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:56:05 crc kubenswrapper[4761]: E0307 08:56:05.710360 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.781750 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547896-k5cdv" event={"ID":"9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b","Type":"ContainerDied","Data":"7176b8077d7a54571d69e49fb6102cd980dfe6633cd8e62626e6ab7ee865df5a"} Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.781814 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7176b8077d7a54571d69e49fb6102cd980dfe6633cd8e62626e6ab7ee865df5a" Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.781983 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547896-k5cdv" Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.829310 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547890-4n5w5"] Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.844448 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547890-4n5w5"] Mar 07 08:56:07 crc kubenswrapper[4761]: I0307 08:56:07.720805 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c12dbae5-26b3-47ac-8709-9d6609dabbdf" path="/var/lib/kubelet/pods/c12dbae5-26b3-47ac-8709-9d6609dabbdf/volumes" Mar 07 08:56:17 crc kubenswrapper[4761]: I0307 08:56:17.708110 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:56:17 crc kubenswrapper[4761]: E0307 08:56:17.709315 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:56:21 crc kubenswrapper[4761]: I0307 08:56:21.547388 4761 scope.go:117] "RemoveContainer" containerID="33342531c4e730445c577ee05f6170f96c2fe0e2e49bbda868bde245eb6c34f9" Mar 07 08:56:28 crc kubenswrapper[4761]: I0307 08:56:28.706551 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:56:28 crc kubenswrapper[4761]: E0307 08:56:28.707456 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:56:42 crc kubenswrapper[4761]: I0307 08:56:42.705945 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:56:42 crc kubenswrapper[4761]: E0307 08:56:42.706920 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:56:53 crc kubenswrapper[4761]: I0307 08:56:53.716382 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:56:53 crc kubenswrapper[4761]: E0307 08:56:53.717320 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:57:04 crc kubenswrapper[4761]: I0307 08:57:04.706023 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:57:04 crc kubenswrapper[4761]: E0307 08:57:04.706802 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:57:16 crc kubenswrapper[4761]: I0307 08:57:16.705487 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:57:16 crc kubenswrapper[4761]: E0307 08:57:16.706392 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:57:29 crc kubenswrapper[4761]: I0307 08:57:29.706974 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:57:29 crc kubenswrapper[4761]: E0307 08:57:29.708213 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:57:44 crc kubenswrapper[4761]: I0307 08:57:44.705937 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:57:44 crc kubenswrapper[4761]: E0307 08:57:44.706678 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:57:58 crc kubenswrapper[4761]: I0307 08:57:58.705817 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:57:58 crc kubenswrapper[4761]: E0307 08:57:58.707010 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.149603 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547898-vn4zd"] Mar 07 08:58:00 crc kubenswrapper[4761]: E0307 08:58:00.150850 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b" containerName="oc" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.150867 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b" containerName="oc" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.151138 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b" containerName="oc" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.152107 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547898-vn4zd" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.154934 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.156416 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.164862 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.196778 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qhsc\" (UniqueName: \"kubernetes.io/projected/99bad14e-cb05-46f6-90d5-2386ee98f2f8-kube-api-access-5qhsc\") pod \"auto-csr-approver-29547898-vn4zd\" (UID: \"99bad14e-cb05-46f6-90d5-2386ee98f2f8\") " pod="openshift-infra/auto-csr-approver-29547898-vn4zd" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.220963 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547898-vn4zd"] Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.298364 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qhsc\" (UniqueName: \"kubernetes.io/projected/99bad14e-cb05-46f6-90d5-2386ee98f2f8-kube-api-access-5qhsc\") pod \"auto-csr-approver-29547898-vn4zd\" (UID: \"99bad14e-cb05-46f6-90d5-2386ee98f2f8\") " pod="openshift-infra/auto-csr-approver-29547898-vn4zd" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.319521 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qhsc\" (UniqueName: \"kubernetes.io/projected/99bad14e-cb05-46f6-90d5-2386ee98f2f8-kube-api-access-5qhsc\") pod \"auto-csr-approver-29547898-vn4zd\" (UID: \"99bad14e-cb05-46f6-90d5-2386ee98f2f8\") " 
pod="openshift-infra/auto-csr-approver-29547898-vn4zd" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.475978 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547898-vn4zd" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.983409 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547898-vn4zd"] Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.985244 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:58:01 crc kubenswrapper[4761]: I0307 08:58:01.134884 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547898-vn4zd" event={"ID":"99bad14e-cb05-46f6-90d5-2386ee98f2f8","Type":"ContainerStarted","Data":"4007fb7a319e145298b38f7d45734c1313f24389cfb76d70cc7323d8169e527c"} Mar 07 08:58:02 crc kubenswrapper[4761]: I0307 08:58:02.163167 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547898-vn4zd" event={"ID":"99bad14e-cb05-46f6-90d5-2386ee98f2f8","Type":"ContainerStarted","Data":"b4d647aca9c63bfa93e553e6736bab3284efe222b6b79d764f3826cf7b8a38e6"} Mar 07 08:58:03 crc kubenswrapper[4761]: I0307 08:58:03.173044 4761 generic.go:334] "Generic (PLEG): container finished" podID="99bad14e-cb05-46f6-90d5-2386ee98f2f8" containerID="b4d647aca9c63bfa93e553e6736bab3284efe222b6b79d764f3826cf7b8a38e6" exitCode=0 Mar 07 08:58:03 crc kubenswrapper[4761]: I0307 08:58:03.173330 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547898-vn4zd" event={"ID":"99bad14e-cb05-46f6-90d5-2386ee98f2f8","Type":"ContainerDied","Data":"b4d647aca9c63bfa93e553e6736bab3284efe222b6b79d764f3826cf7b8a38e6"} Mar 07 08:58:04 crc kubenswrapper[4761]: I0307 08:58:04.647335 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547898-vn4zd" Mar 07 08:58:04 crc kubenswrapper[4761]: I0307 08:58:04.817606 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qhsc\" (UniqueName: \"kubernetes.io/projected/99bad14e-cb05-46f6-90d5-2386ee98f2f8-kube-api-access-5qhsc\") pod \"99bad14e-cb05-46f6-90d5-2386ee98f2f8\" (UID: \"99bad14e-cb05-46f6-90d5-2386ee98f2f8\") " Mar 07 08:58:04 crc kubenswrapper[4761]: I0307 08:58:04.823988 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99bad14e-cb05-46f6-90d5-2386ee98f2f8-kube-api-access-5qhsc" (OuterVolumeSpecName: "kube-api-access-5qhsc") pod "99bad14e-cb05-46f6-90d5-2386ee98f2f8" (UID: "99bad14e-cb05-46f6-90d5-2386ee98f2f8"). InnerVolumeSpecName "kube-api-access-5qhsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:58:04 crc kubenswrapper[4761]: I0307 08:58:04.921275 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qhsc\" (UniqueName: \"kubernetes.io/projected/99bad14e-cb05-46f6-90d5-2386ee98f2f8-kube-api-access-5qhsc\") on node \"crc\" DevicePath \"\"" Mar 07 08:58:05 crc kubenswrapper[4761]: I0307 08:58:05.201121 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547898-vn4zd" event={"ID":"99bad14e-cb05-46f6-90d5-2386ee98f2f8","Type":"ContainerDied","Data":"4007fb7a319e145298b38f7d45734c1313f24389cfb76d70cc7323d8169e527c"} Mar 07 08:58:05 crc kubenswrapper[4761]: I0307 08:58:05.201443 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4007fb7a319e145298b38f7d45734c1313f24389cfb76d70cc7323d8169e527c" Mar 07 08:58:05 crc kubenswrapper[4761]: I0307 08:58:05.201161 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547898-vn4zd" Mar 07 08:58:05 crc kubenswrapper[4761]: I0307 08:58:05.265022 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547892-pgkbb"] Mar 07 08:58:05 crc kubenswrapper[4761]: I0307 08:58:05.285302 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547892-pgkbb"] Mar 07 08:58:05 crc kubenswrapper[4761]: I0307 08:58:05.723599 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4994561-8589-40fb-92a6-20c78e23331b" path="/var/lib/kubelet/pods/a4994561-8589-40fb-92a6-20c78e23331b/volumes" Mar 07 08:58:09 crc kubenswrapper[4761]: I0307 08:58:09.706078 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:58:09 crc kubenswrapper[4761]: E0307 08:58:09.707188 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:58:21 crc kubenswrapper[4761]: I0307 08:58:21.673208 4761 scope.go:117] "RemoveContainer" containerID="16c463101ccc932f6798ea8b411f08bbb64c2a538904d22603e2d036ee3fbb54" Mar 07 08:58:22 crc kubenswrapper[4761]: I0307 08:58:22.706379 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:58:22 crc kubenswrapper[4761]: E0307 08:58:22.707059 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:58:33 crc kubenswrapper[4761]: I0307 08:58:33.715606 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:58:33 crc kubenswrapper[4761]: E0307 08:58:33.716616 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:58:44 crc kubenswrapper[4761]: I0307 08:58:44.706550 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:58:44 crc kubenswrapper[4761]: E0307 08:58:44.707234 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.244842 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vlw6n"] Mar 07 08:58:50 crc kubenswrapper[4761]: E0307 08:58:50.245930 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99bad14e-cb05-46f6-90d5-2386ee98f2f8" containerName="oc" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.245947 4761 
state_mem.go:107] "Deleted CPUSet assignment" podUID="99bad14e-cb05-46f6-90d5-2386ee98f2f8" containerName="oc" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.246174 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="99bad14e-cb05-46f6-90d5-2386ee98f2f8" containerName="oc" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.247789 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.272240 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vlw6n"] Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.403856 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xztm\" (UniqueName: \"kubernetes.io/projected/67f6e451-e61d-41ba-a52d-ab78e4961c51-kube-api-access-8xztm\") pod \"redhat-operators-vlw6n\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.404103 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-utilities\") pod \"redhat-operators-vlw6n\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.404330 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-catalog-content\") pod \"redhat-operators-vlw6n\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.506403 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-catalog-content\") pod \"redhat-operators-vlw6n\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.506534 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xztm\" (UniqueName: \"kubernetes.io/projected/67f6e451-e61d-41ba-a52d-ab78e4961c51-kube-api-access-8xztm\") pod \"redhat-operators-vlw6n\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.506574 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-utilities\") pod \"redhat-operators-vlw6n\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.506986 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-catalog-content\") pod \"redhat-operators-vlw6n\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.507212 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-utilities\") pod \"redhat-operators-vlw6n\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.548029 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xztm\" 
(UniqueName: \"kubernetes.io/projected/67f6e451-e61d-41ba-a52d-ab78e4961c51-kube-api-access-8xztm\") pod \"redhat-operators-vlw6n\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.567889 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:51 crc kubenswrapper[4761]: I0307 08:58:51.125072 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vlw6n"] Mar 07 08:58:51 crc kubenswrapper[4761]: I0307 08:58:51.795992 4761 generic.go:334] "Generic (PLEG): container finished" podID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerID="0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5" exitCode=0 Mar 07 08:58:51 crc kubenswrapper[4761]: I0307 08:58:51.796311 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlw6n" event={"ID":"67f6e451-e61d-41ba-a52d-ab78e4961c51","Type":"ContainerDied","Data":"0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5"} Mar 07 08:58:51 crc kubenswrapper[4761]: I0307 08:58:51.796340 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlw6n" event={"ID":"67f6e451-e61d-41ba-a52d-ab78e4961c51","Type":"ContainerStarted","Data":"f93a2e7a8c79e9e86f680d944ab7afed5ba584e3df3641832818564679a4203f"} Mar 07 08:58:52 crc kubenswrapper[4761]: I0307 08:58:52.813561 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlw6n" event={"ID":"67f6e451-e61d-41ba-a52d-ab78e4961c51","Type":"ContainerStarted","Data":"bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522"} Mar 07 08:58:55 crc kubenswrapper[4761]: I0307 08:58:55.706532 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 
08:58:55 crc kubenswrapper[4761]: E0307 08:58:55.707842 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:59:01 crc kubenswrapper[4761]: I0307 08:59:01.904301 4761 generic.go:334] "Generic (PLEG): container finished" podID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerID="bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522" exitCode=0 Mar 07 08:59:01 crc kubenswrapper[4761]: I0307 08:59:01.904392 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlw6n" event={"ID":"67f6e451-e61d-41ba-a52d-ab78e4961c51","Type":"ContainerDied","Data":"bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522"} Mar 07 08:59:02 crc kubenswrapper[4761]: I0307 08:59:02.917968 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlw6n" event={"ID":"67f6e451-e61d-41ba-a52d-ab78e4961c51","Type":"ContainerStarted","Data":"db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492"} Mar 07 08:59:02 crc kubenswrapper[4761]: I0307 08:59:02.938623 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vlw6n" podStartSLOduration=2.37679472 podStartE2EDuration="12.938605891s" podCreationTimestamp="2026-03-07 08:58:50 +0000 UTC" firstStartedPulling="2026-03-07 08:58:51.804966326 +0000 UTC m=+4188.714132801" lastFinishedPulling="2026-03-07 08:59:02.366777507 +0000 UTC m=+4199.275943972" observedRunningTime="2026-03-07 08:59:02.93482631 +0000 UTC m=+4199.843992785" watchObservedRunningTime="2026-03-07 08:59:02.938605891 +0000 UTC 
m=+4199.847772356" Mar 07 08:59:07 crc kubenswrapper[4761]: I0307 08:59:07.706160 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:59:07 crc kubenswrapper[4761]: E0307 08:59:07.707037 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:59:10 crc kubenswrapper[4761]: I0307 08:59:10.568136 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:59:10 crc kubenswrapper[4761]: I0307 08:59:10.568619 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:59:11 crc kubenswrapper[4761]: I0307 08:59:11.617609 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vlw6n" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerName="registry-server" probeResult="failure" output=< Mar 07 08:59:11 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:59:11 crc kubenswrapper[4761]: > Mar 07 08:59:19 crc kubenswrapper[4761]: I0307 08:59:19.706368 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:59:19 crc kubenswrapper[4761]: E0307 08:59:19.707146 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:59:20 crc kubenswrapper[4761]: I0307 08:59:20.959825 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:59:21 crc kubenswrapper[4761]: I0307 08:59:21.033279 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:59:24 crc kubenswrapper[4761]: I0307 08:59:24.446127 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vlw6n"] Mar 07 08:59:24 crc kubenswrapper[4761]: I0307 08:59:24.446911 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vlw6n" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerName="registry-server" containerID="cri-o://db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492" gracePeriod=2 Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.055494 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.150630 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-catalog-content\") pod \"67f6e451-e61d-41ba-a52d-ab78e4961c51\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.151032 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-utilities\") pod \"67f6e451-e61d-41ba-a52d-ab78e4961c51\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.151334 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xztm\" (UniqueName: \"kubernetes.io/projected/67f6e451-e61d-41ba-a52d-ab78e4961c51-kube-api-access-8xztm\") pod \"67f6e451-e61d-41ba-a52d-ab78e4961c51\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.151529 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-utilities" (OuterVolumeSpecName: "utilities") pod "67f6e451-e61d-41ba-a52d-ab78e4961c51" (UID: "67f6e451-e61d-41ba-a52d-ab78e4961c51"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.152314 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.158028 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f6e451-e61d-41ba-a52d-ab78e4961c51-kube-api-access-8xztm" (OuterVolumeSpecName: "kube-api-access-8xztm") pod "67f6e451-e61d-41ba-a52d-ab78e4961c51" (UID: "67f6e451-e61d-41ba-a52d-ab78e4961c51"). InnerVolumeSpecName "kube-api-access-8xztm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.198761 4761 generic.go:334] "Generic (PLEG): container finished" podID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerID="db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492" exitCode=0 Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.198802 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlw6n" event={"ID":"67f6e451-e61d-41ba-a52d-ab78e4961c51","Type":"ContainerDied","Data":"db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492"} Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.198828 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlw6n" event={"ID":"67f6e451-e61d-41ba-a52d-ab78e4961c51","Type":"ContainerDied","Data":"f93a2e7a8c79e9e86f680d944ab7afed5ba584e3df3641832818564679a4203f"} Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.198834 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.198846 4761 scope.go:117] "RemoveContainer" containerID="db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.235236 4761 scope.go:117] "RemoveContainer" containerID="bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.255260 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xztm\" (UniqueName: \"kubernetes.io/projected/67f6e451-e61d-41ba-a52d-ab78e4961c51-kube-api-access-8xztm\") on node \"crc\" DevicePath \"\"" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.260754 4761 scope.go:117] "RemoveContainer" containerID="0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.311355 4761 scope.go:117] "RemoveContainer" containerID="db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492" Mar 07 08:59:25 crc kubenswrapper[4761]: E0307 08:59:25.311842 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492\": container with ID starting with db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492 not found: ID does not exist" containerID="db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.311885 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492"} err="failed to get container status \"db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492\": rpc error: code = NotFound desc = could not find container 
\"db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492\": container with ID starting with db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492 not found: ID does not exist" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.311910 4761 scope.go:117] "RemoveContainer" containerID="bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522" Mar 07 08:59:25 crc kubenswrapper[4761]: E0307 08:59:25.312349 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522\": container with ID starting with bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522 not found: ID does not exist" containerID="bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.312390 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522"} err="failed to get container status \"bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522\": rpc error: code = NotFound desc = could not find container \"bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522\": container with ID starting with bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522 not found: ID does not exist" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.312418 4761 scope.go:117] "RemoveContainer" containerID="0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5" Mar 07 08:59:25 crc kubenswrapper[4761]: E0307 08:59:25.312688 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5\": container with ID starting with 0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5 not found: ID does not exist" 
containerID="0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.312732 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5"} err="failed to get container status \"0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5\": rpc error: code = NotFound desc = could not find container \"0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5\": container with ID starting with 0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5 not found: ID does not exist" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.315936 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67f6e451-e61d-41ba-a52d-ab78e4961c51" (UID: "67f6e451-e61d-41ba-a52d-ab78e4961c51"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.357688 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.542298 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vlw6n"] Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.553373 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vlw6n"] Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.723693 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" path="/var/lib/kubelet/pods/67f6e451-e61d-41ba-a52d-ab78e4961c51/volumes" Mar 07 08:59:32 crc kubenswrapper[4761]: I0307 08:59:32.705657 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:59:32 crc kubenswrapper[4761]: E0307 08:59:32.706429 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:59:43 crc kubenswrapper[4761]: I0307 08:59:43.716210 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:59:43 crc kubenswrapper[4761]: E0307 08:59:43.717600 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:59:55 crc kubenswrapper[4761]: I0307 08:59:55.706468 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:59:57 crc kubenswrapper[4761]: I0307 08:59:57.020515 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"7f80f30dd74ec8eec9d0d65df5727221eb321d5c597633536f2b7fa2b1d20fb6"} Mar 07 08:59:58 crc kubenswrapper[4761]: I0307 08:59:58.803493 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerName="galera" probeResult="failure" output="command timed out" Mar 07 08:59:58 crc kubenswrapper[4761]: I0307 08:59:58.808943 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.150067 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547900-4mplz"] Mar 07 09:00:00 crc kubenswrapper[4761]: E0307 09:00:00.150852 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerName="extract-utilities" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.150870 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerName="extract-utilities" Mar 07 09:00:00 crc kubenswrapper[4761]: E0307 09:00:00.150888 4761 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerName="registry-server" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.150897 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerName="registry-server" Mar 07 09:00:00 crc kubenswrapper[4761]: E0307 09:00:00.150941 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerName="extract-content" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.150951 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerName="extract-content" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.151218 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerName="registry-server" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.152296 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547900-4mplz" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.155522 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.156170 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.155612 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.165940 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547900-4mplz"] Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.256430 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6"] Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.258661 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.260884 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.263678 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.270477 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6"] Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.311661 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djsk6\" (UniqueName: \"kubernetes.io/projected/d76aff1d-3203-40ca-831e-c2628cc785e5-kube-api-access-djsk6\") pod \"auto-csr-approver-29547900-4mplz\" (UID: \"d76aff1d-3203-40ca-831e-c2628cc785e5\") " pod="openshift-infra/auto-csr-approver-29547900-4mplz" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.413594 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqn4s\" (UniqueName: \"kubernetes.io/projected/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-kube-api-access-zqn4s\") pod \"collect-profiles-29547900-xwxq6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.413671 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-secret-volume\") pod \"collect-profiles-29547900-xwxq6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.413943 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djsk6\" (UniqueName: \"kubernetes.io/projected/d76aff1d-3203-40ca-831e-c2628cc785e5-kube-api-access-djsk6\") pod \"auto-csr-approver-29547900-4mplz\" (UID: \"d76aff1d-3203-40ca-831e-c2628cc785e5\") " pod="openshift-infra/auto-csr-approver-29547900-4mplz" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.414075 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-config-volume\") pod \"collect-profiles-29547900-xwxq6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.436369 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djsk6\" (UniqueName: \"kubernetes.io/projected/d76aff1d-3203-40ca-831e-c2628cc785e5-kube-api-access-djsk6\") pod \"auto-csr-approver-29547900-4mplz\" (UID: \"d76aff1d-3203-40ca-831e-c2628cc785e5\") " pod="openshift-infra/auto-csr-approver-29547900-4mplz" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.485307 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547900-4mplz" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.516616 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqn4s\" (UniqueName: \"kubernetes.io/projected/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-kube-api-access-zqn4s\") pod \"collect-profiles-29547900-xwxq6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.516765 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-secret-volume\") pod \"collect-profiles-29547900-xwxq6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.516881 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-config-volume\") pod \"collect-profiles-29547900-xwxq6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.518015 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-config-volume\") pod \"collect-profiles-29547900-xwxq6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.521209 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-secret-volume\") pod \"collect-profiles-29547900-xwxq6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.538620 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqn4s\" (UniqueName: \"kubernetes.io/projected/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-kube-api-access-zqn4s\") pod \"collect-profiles-29547900-xwxq6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.580927 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: W0307 09:00:00.937582 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd76aff1d_3203_40ca_831e_c2628cc785e5.slice/crio-dd7dcd313c6826a2ab7f695730f2c8fcb316250483b7f7da23d0b6e0549fefca WatchSource:0}: Error finding container dd7dcd313c6826a2ab7f695730f2c8fcb316250483b7f7da23d0b6e0549fefca: Status 404 returned error can't find the container with id dd7dcd313c6826a2ab7f695730f2c8fcb316250483b7f7da23d0b6e0549fefca Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.941950 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547900-4mplz"] Mar 07 09:00:01 crc kubenswrapper[4761]: I0307 09:00:01.070546 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547900-4mplz" event={"ID":"d76aff1d-3203-40ca-831e-c2628cc785e5","Type":"ContainerStarted","Data":"dd7dcd313c6826a2ab7f695730f2c8fcb316250483b7f7da23d0b6e0549fefca"} Mar 07 09:00:01 crc kubenswrapper[4761]: I0307 09:00:01.131578 4761 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6"] Mar 07 09:00:02 crc kubenswrapper[4761]: I0307 09:00:02.094229 4761 generic.go:334] "Generic (PLEG): container finished" podID="f14e6014-5089-42f4-a0c3-42b5ce2a50a6" containerID="c1dbb5bb3462fa73ce73e25db404d6fa53e16320768e271e5ea712b9a3a3878b" exitCode=0 Mar 07 09:00:02 crc kubenswrapper[4761]: I0307 09:00:02.094300 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" event={"ID":"f14e6014-5089-42f4-a0c3-42b5ce2a50a6","Type":"ContainerDied","Data":"c1dbb5bb3462fa73ce73e25db404d6fa53e16320768e271e5ea712b9a3a3878b"} Mar 07 09:00:02 crc kubenswrapper[4761]: I0307 09:00:02.094524 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" event={"ID":"f14e6014-5089-42f4-a0c3-42b5ce2a50a6","Type":"ContainerStarted","Data":"395062ad6f0cba6df8f2e9fff76dfa7be3716d89f3908642c422ee9b1217a659"} Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.529571 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.709793 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqn4s\" (UniqueName: \"kubernetes.io/projected/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-kube-api-access-zqn4s\") pod \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.710206 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-secret-volume\") pod \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.710257 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-config-volume\") pod \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.711183 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-config-volume" (OuterVolumeSpecName: "config-volume") pod "f14e6014-5089-42f4-a0c3-42b5ce2a50a6" (UID: "f14e6014-5089-42f4-a0c3-42b5ce2a50a6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.711830 4761 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.714941 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-kube-api-access-zqn4s" (OuterVolumeSpecName: "kube-api-access-zqn4s") pod "f14e6014-5089-42f4-a0c3-42b5ce2a50a6" (UID: "f14e6014-5089-42f4-a0c3-42b5ce2a50a6"). InnerVolumeSpecName "kube-api-access-zqn4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.716887 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f14e6014-5089-42f4-a0c3-42b5ce2a50a6" (UID: "f14e6014-5089-42f4-a0c3-42b5ce2a50a6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.814671 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqn4s\" (UniqueName: \"kubernetes.io/projected/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-kube-api-access-zqn4s\") on node \"crc\" DevicePath \"\"" Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.814730 4761 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 09:00:04 crc kubenswrapper[4761]: I0307 09:00:04.119316 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" event={"ID":"f14e6014-5089-42f4-a0c3-42b5ce2a50a6","Type":"ContainerDied","Data":"395062ad6f0cba6df8f2e9fff76dfa7be3716d89f3908642c422ee9b1217a659"} Mar 07 09:00:04 crc kubenswrapper[4761]: I0307 09:00:04.119358 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="395062ad6f0cba6df8f2e9fff76dfa7be3716d89f3908642c422ee9b1217a659" Mar 07 09:00:04 crc kubenswrapper[4761]: I0307 09:00:04.119362 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:04 crc kubenswrapper[4761]: I0307 09:00:04.609375 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn"] Mar 07 09:00:04 crc kubenswrapper[4761]: I0307 09:00:04.621052 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn"] Mar 07 09:00:05 crc kubenswrapper[4761]: I0307 09:00:05.131330 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547900-4mplz" event={"ID":"d76aff1d-3203-40ca-831e-c2628cc785e5","Type":"ContainerStarted","Data":"deae858385797443c973dbee5b25daad37118b53b46aef06b53d89d1a40ab719"} Mar 07 09:00:05 crc kubenswrapper[4761]: I0307 09:00:05.159797 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547900-4mplz" podStartSLOduration=1.714776819 podStartE2EDuration="5.159776943s" podCreationTimestamp="2026-03-07 09:00:00 +0000 UTC" firstStartedPulling="2026-03-07 09:00:00.940104387 +0000 UTC m=+4257.849270862" lastFinishedPulling="2026-03-07 09:00:04.385104511 +0000 UTC m=+4261.294270986" observedRunningTime="2026-03-07 09:00:05.147192348 +0000 UTC m=+4262.056358813" watchObservedRunningTime="2026-03-07 09:00:05.159776943 +0000 UTC m=+4262.068943418" Mar 07 09:00:05 crc kubenswrapper[4761]: I0307 09:00:05.720400 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="840f778c-fb9b-4f24-b884-fb58aa298ad5" path="/var/lib/kubelet/pods/840f778c-fb9b-4f24-b884-fb58aa298ad5/volumes" Mar 07 09:00:06 crc kubenswrapper[4761]: I0307 09:00:06.143710 4761 generic.go:334] "Generic (PLEG): container finished" podID="d76aff1d-3203-40ca-831e-c2628cc785e5" containerID="deae858385797443c973dbee5b25daad37118b53b46aef06b53d89d1a40ab719" exitCode=0 Mar 07 09:00:06 crc kubenswrapper[4761]: 
I0307 09:00:06.143960 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547900-4mplz" event={"ID":"d76aff1d-3203-40ca-831e-c2628cc785e5","Type":"ContainerDied","Data":"deae858385797443c973dbee5b25daad37118b53b46aef06b53d89d1a40ab719"} Mar 07 09:00:08 crc kubenswrapper[4761]: I0307 09:00:08.167361 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547900-4mplz" event={"ID":"d76aff1d-3203-40ca-831e-c2628cc785e5","Type":"ContainerDied","Data":"dd7dcd313c6826a2ab7f695730f2c8fcb316250483b7f7da23d0b6e0549fefca"} Mar 07 09:00:08 crc kubenswrapper[4761]: I0307 09:00:08.167634 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd7dcd313c6826a2ab7f695730f2c8fcb316250483b7f7da23d0b6e0549fefca" Mar 07 09:00:08 crc kubenswrapper[4761]: I0307 09:00:08.258472 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547900-4mplz" Mar 07 09:00:08 crc kubenswrapper[4761]: I0307 09:00:08.435918 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djsk6\" (UniqueName: \"kubernetes.io/projected/d76aff1d-3203-40ca-831e-c2628cc785e5-kube-api-access-djsk6\") pod \"d76aff1d-3203-40ca-831e-c2628cc785e5\" (UID: \"d76aff1d-3203-40ca-831e-c2628cc785e5\") " Mar 07 09:00:08 crc kubenswrapper[4761]: I0307 09:00:08.442455 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d76aff1d-3203-40ca-831e-c2628cc785e5-kube-api-access-djsk6" (OuterVolumeSpecName: "kube-api-access-djsk6") pod "d76aff1d-3203-40ca-831e-c2628cc785e5" (UID: "d76aff1d-3203-40ca-831e-c2628cc785e5"). InnerVolumeSpecName "kube-api-access-djsk6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:00:08 crc kubenswrapper[4761]: I0307 09:00:08.538838 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djsk6\" (UniqueName: \"kubernetes.io/projected/d76aff1d-3203-40ca-831e-c2628cc785e5-kube-api-access-djsk6\") on node \"crc\" DevicePath \"\"" Mar 07 09:00:09 crc kubenswrapper[4761]: I0307 09:00:09.180155 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547900-4mplz" Mar 07 09:00:09 crc kubenswrapper[4761]: I0307 09:00:09.326913 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547894-qfvdj"] Mar 07 09:00:09 crc kubenswrapper[4761]: I0307 09:00:09.340085 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547894-qfvdj"] Mar 07 09:00:09 crc kubenswrapper[4761]: I0307 09:00:09.719343 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a11aad48-1955-49bc-8682-f74ea9d9b3c7" path="/var/lib/kubelet/pods/a11aad48-1955-49bc-8682-f74ea9d9b3c7/volumes" Mar 07 09:00:21 crc kubenswrapper[4761]: I0307 09:00:21.815866 4761 scope.go:117] "RemoveContainer" containerID="badfe9d91efd9e075476e7823eeaba9a665c8e9932f391bf56cc70889344449f" Mar 07 09:00:22 crc kubenswrapper[4761]: I0307 09:00:22.272859 4761 scope.go:117] "RemoveContainer" containerID="5c05a441b88ae639cd727974fecbf1db12e7886b856bb7b0883e62ba8cee569b" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.150494 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29547901-b7kzn"] Mar 07 09:01:00 crc kubenswrapper[4761]: E0307 09:01:00.151686 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76aff1d-3203-40ca-831e-c2628cc785e5" containerName="oc" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.151706 4761 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d76aff1d-3203-40ca-831e-c2628cc785e5" containerName="oc" Mar 07 09:01:00 crc kubenswrapper[4761]: E0307 09:01:00.151771 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f14e6014-5089-42f4-a0c3-42b5ce2a50a6" containerName="collect-profiles" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.151781 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f14e6014-5089-42f4-a0c3-42b5ce2a50a6" containerName="collect-profiles" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.152081 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d76aff1d-3203-40ca-831e-c2628cc785e5" containerName="oc" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.152111 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f14e6014-5089-42f4-a0c3-42b5ce2a50a6" containerName="collect-profiles" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.153246 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.164524 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29547901-b7kzn"] Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.239264 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-combined-ca-bundle\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.239333 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxmzk\" (UniqueName: \"kubernetes.io/projected/b0d8c848-14d6-46c1-a912-87673a3d974a-kube-api-access-kxmzk\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " 
pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.239614 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-config-data\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.239854 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-fernet-keys\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.342361 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-combined-ca-bundle\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.342414 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxmzk\" (UniqueName: \"kubernetes.io/projected/b0d8c848-14d6-46c1-a912-87673a3d974a-kube-api-access-kxmzk\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.342472 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-config-data\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " 
pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.342510 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-fernet-keys\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.349640 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-combined-ca-bundle\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.349818 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-fernet-keys\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.349843 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-config-data\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.361919 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxmzk\" (UniqueName: \"kubernetes.io/projected/b0d8c848-14d6-46c1-a912-87673a3d974a-kube-api-access-kxmzk\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 
09:01:00.480990 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:01 crc kubenswrapper[4761]: I0307 09:01:01.011250 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29547901-b7kzn"] Mar 07 09:01:01 crc kubenswrapper[4761]: I0307 09:01:01.784861 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29547901-b7kzn" event={"ID":"b0d8c848-14d6-46c1-a912-87673a3d974a","Type":"ContainerStarted","Data":"2654869ad635b91de3f416368ec56e41d4b361c7739131017706994bea6bab69"} Mar 07 09:01:01 crc kubenswrapper[4761]: I0307 09:01:01.785338 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29547901-b7kzn" event={"ID":"b0d8c848-14d6-46c1-a912-87673a3d974a","Type":"ContainerStarted","Data":"e904103999e59efbc9ae66d4e2bca87e7a198558dada35117e97a316b31e36b2"} Mar 07 09:01:06 crc kubenswrapper[4761]: I0307 09:01:06.858392 4761 generic.go:334] "Generic (PLEG): container finished" podID="b0d8c848-14d6-46c1-a912-87673a3d974a" containerID="2654869ad635b91de3f416368ec56e41d4b361c7739131017706994bea6bab69" exitCode=0 Mar 07 09:01:06 crc kubenswrapper[4761]: I0307 09:01:06.858441 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29547901-b7kzn" event={"ID":"b0d8c848-14d6-46c1-a912-87673a3d974a","Type":"ContainerDied","Data":"2654869ad635b91de3f416368ec56e41d4b361c7739131017706994bea6bab69"} Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.320065 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.364926 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-config-data\") pod \"b0d8c848-14d6-46c1-a912-87673a3d974a\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.365406 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-fernet-keys\") pod \"b0d8c848-14d6-46c1-a912-87673a3d974a\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.365501 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxmzk\" (UniqueName: \"kubernetes.io/projected/b0d8c848-14d6-46c1-a912-87673a3d974a-kube-api-access-kxmzk\") pod \"b0d8c848-14d6-46c1-a912-87673a3d974a\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.365592 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-combined-ca-bundle\") pod \"b0d8c848-14d6-46c1-a912-87673a3d974a\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.373169 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d8c848-14d6-46c1-a912-87673a3d974a-kube-api-access-kxmzk" (OuterVolumeSpecName: "kube-api-access-kxmzk") pod "b0d8c848-14d6-46c1-a912-87673a3d974a" (UID: "b0d8c848-14d6-46c1-a912-87673a3d974a"). InnerVolumeSpecName "kube-api-access-kxmzk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.373917 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b0d8c848-14d6-46c1-a912-87673a3d974a" (UID: "b0d8c848-14d6-46c1-a912-87673a3d974a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.412911 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0d8c848-14d6-46c1-a912-87673a3d974a" (UID: "b0d8c848-14d6-46c1-a912-87673a3d974a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.445607 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-config-data" (OuterVolumeSpecName: "config-data") pod "b0d8c848-14d6-46c1-a912-87673a3d974a" (UID: "b0d8c848-14d6-46c1-a912-87673a3d974a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.469585 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.469622 4761 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.469635 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxmzk\" (UniqueName: \"kubernetes.io/projected/b0d8c848-14d6-46c1-a912-87673a3d974a-kube-api-access-kxmzk\") on node \"crc\" DevicePath \"\"" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.469648 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.879035 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29547901-b7kzn" event={"ID":"b0d8c848-14d6-46c1-a912-87673a3d974a","Type":"ContainerDied","Data":"e904103999e59efbc9ae66d4e2bca87e7a198558dada35117e97a316b31e36b2"} Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.879075 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e904103999e59efbc9ae66d4e2bca87e7a198558dada35117e97a316b31e36b2" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.879196 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.156929 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547902-2ntd7"] Mar 07 09:02:00 crc kubenswrapper[4761]: E0307 09:02:00.157842 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d8c848-14d6-46c1-a912-87673a3d974a" containerName="keystone-cron" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.157855 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d8c848-14d6-46c1-a912-87673a3d974a" containerName="keystone-cron" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.158099 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d8c848-14d6-46c1-a912-87673a3d974a" containerName="keystone-cron" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.159040 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547902-2ntd7" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.161669 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.161777 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.165875 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.184861 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547902-2ntd7"] Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.293858 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdw6p\" (UniqueName: 
\"kubernetes.io/projected/8bc7c313-fe46-4bb6-ac32-7b2e93f98c63-kube-api-access-zdw6p\") pod \"auto-csr-approver-29547902-2ntd7\" (UID: \"8bc7c313-fe46-4bb6-ac32-7b2e93f98c63\") " pod="openshift-infra/auto-csr-approver-29547902-2ntd7" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.397298 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdw6p\" (UniqueName: \"kubernetes.io/projected/8bc7c313-fe46-4bb6-ac32-7b2e93f98c63-kube-api-access-zdw6p\") pod \"auto-csr-approver-29547902-2ntd7\" (UID: \"8bc7c313-fe46-4bb6-ac32-7b2e93f98c63\") " pod="openshift-infra/auto-csr-approver-29547902-2ntd7" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.422204 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdw6p\" (UniqueName: \"kubernetes.io/projected/8bc7c313-fe46-4bb6-ac32-7b2e93f98c63-kube-api-access-zdw6p\") pod \"auto-csr-approver-29547902-2ntd7\" (UID: \"8bc7c313-fe46-4bb6-ac32-7b2e93f98c63\") " pod="openshift-infra/auto-csr-approver-29547902-2ntd7" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.483039 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547902-2ntd7" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.935811 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547902-2ntd7"] Mar 07 09:02:01 crc kubenswrapper[4761]: I0307 09:02:01.753708 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547902-2ntd7" event={"ID":"8bc7c313-fe46-4bb6-ac32-7b2e93f98c63","Type":"ContainerStarted","Data":"3c59b665820efc175cce3097f826c7b48d338f85856df3a62c9419ef87e38c4f"} Mar 07 09:02:02 crc kubenswrapper[4761]: I0307 09:02:02.770886 4761 generic.go:334] "Generic (PLEG): container finished" podID="8bc7c313-fe46-4bb6-ac32-7b2e93f98c63" containerID="6eafd66faadd47449571d768d116121ea726b6ad2cc3e26ec17adb107e02d96b" exitCode=0 Mar 07 09:02:02 crc kubenswrapper[4761]: I0307 09:02:02.770983 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547902-2ntd7" event={"ID":"8bc7c313-fe46-4bb6-ac32-7b2e93f98c63","Type":"ContainerDied","Data":"6eafd66faadd47449571d768d116121ea726b6ad2cc3e26ec17adb107e02d96b"} Mar 07 09:02:04 crc kubenswrapper[4761]: I0307 09:02:04.188077 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547902-2ntd7" Mar 07 09:02:04 crc kubenswrapper[4761]: I0307 09:02:04.387473 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdw6p\" (UniqueName: \"kubernetes.io/projected/8bc7c313-fe46-4bb6-ac32-7b2e93f98c63-kube-api-access-zdw6p\") pod \"8bc7c313-fe46-4bb6-ac32-7b2e93f98c63\" (UID: \"8bc7c313-fe46-4bb6-ac32-7b2e93f98c63\") " Mar 07 09:02:04 crc kubenswrapper[4761]: I0307 09:02:04.395613 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bc7c313-fe46-4bb6-ac32-7b2e93f98c63-kube-api-access-zdw6p" (OuterVolumeSpecName: "kube-api-access-zdw6p") pod "8bc7c313-fe46-4bb6-ac32-7b2e93f98c63" (UID: "8bc7c313-fe46-4bb6-ac32-7b2e93f98c63"). InnerVolumeSpecName "kube-api-access-zdw6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:02:04 crc kubenswrapper[4761]: I0307 09:02:04.491534 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdw6p\" (UniqueName: \"kubernetes.io/projected/8bc7c313-fe46-4bb6-ac32-7b2e93f98c63-kube-api-access-zdw6p\") on node \"crc\" DevicePath \"\"" Mar 07 09:02:04 crc kubenswrapper[4761]: I0307 09:02:04.800429 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547902-2ntd7" event={"ID":"8bc7c313-fe46-4bb6-ac32-7b2e93f98c63","Type":"ContainerDied","Data":"3c59b665820efc175cce3097f826c7b48d338f85856df3a62c9419ef87e38c4f"} Mar 07 09:02:04 crc kubenswrapper[4761]: I0307 09:02:04.800468 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c59b665820efc175cce3097f826c7b48d338f85856df3a62c9419ef87e38c4f" Mar 07 09:02:04 crc kubenswrapper[4761]: I0307 09:02:04.800527 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547902-2ntd7" Mar 07 09:02:05 crc kubenswrapper[4761]: I0307 09:02:05.269749 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547896-k5cdv"] Mar 07 09:02:05 crc kubenswrapper[4761]: I0307 09:02:05.279450 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547896-k5cdv"] Mar 07 09:02:05 crc kubenswrapper[4761]: I0307 09:02:05.722380 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b" path="/var/lib/kubelet/pods/9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b/volumes" Mar 07 09:02:13 crc kubenswrapper[4761]: I0307 09:02:13.768230 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:02:13 crc kubenswrapper[4761]: I0307 09:02:13.771650 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:02:22 crc kubenswrapper[4761]: I0307 09:02:22.410196 4761 scope.go:117] "RemoveContainer" containerID="a3416b57302e9385064dac8130cfb79a1b591b5d67e31cc81d62ed5c3454a4fe" Mar 07 09:02:43 crc kubenswrapper[4761]: I0307 09:02:43.767932 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:02:43 crc kubenswrapper[4761]: 
I0307 09:02:43.768428 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:03:13 crc kubenswrapper[4761]: I0307 09:03:13.768190 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:03:13 crc kubenswrapper[4761]: I0307 09:03:13.768731 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:03:13 crc kubenswrapper[4761]: I0307 09:03:13.768787 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 09:03:13 crc kubenswrapper[4761]: I0307 09:03:13.769826 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f80f30dd74ec8eec9d0d65df5727221eb321d5c597633536f2b7fa2b1d20fb6"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 09:03:13 crc kubenswrapper[4761]: I0307 09:03:13.769894 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" 
containerName="machine-config-daemon" containerID="cri-o://7f80f30dd74ec8eec9d0d65df5727221eb321d5c597633536f2b7fa2b1d20fb6" gracePeriod=600 Mar 07 09:03:13 crc kubenswrapper[4761]: I0307 09:03:13.961791 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="7f80f30dd74ec8eec9d0d65df5727221eb321d5c597633536f2b7fa2b1d20fb6" exitCode=0 Mar 07 09:03:13 crc kubenswrapper[4761]: I0307 09:03:13.961838 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"7f80f30dd74ec8eec9d0d65df5727221eb321d5c597633536f2b7fa2b1d20fb6"} Mar 07 09:03:13 crc kubenswrapper[4761]: I0307 09:03:13.961881 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 09:03:15 crc kubenswrapper[4761]: I0307 09:03:15.994045 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823"} Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.147095 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547904-tqsph"] Mar 07 09:04:00 crc kubenswrapper[4761]: E0307 09:04:00.148081 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bc7c313-fe46-4bb6-ac32-7b2e93f98c63" containerName="oc" Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.148101 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bc7c313-fe46-4bb6-ac32-7b2e93f98c63" containerName="oc" Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.148372 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bc7c313-fe46-4bb6-ac32-7b2e93f98c63" containerName="oc" Mar 07 09:04:00 
crc kubenswrapper[4761]: I0307 09:04:00.149614 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547904-tqsph" Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.152052 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.152292 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.152826 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.161500 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547904-tqsph"] Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.240569 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmlwr\" (UniqueName: \"kubernetes.io/projected/3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9-kube-api-access-fmlwr\") pod \"auto-csr-approver-29547904-tqsph\" (UID: \"3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9\") " pod="openshift-infra/auto-csr-approver-29547904-tqsph" Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.343582 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmlwr\" (UniqueName: \"kubernetes.io/projected/3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9-kube-api-access-fmlwr\") pod \"auto-csr-approver-29547904-tqsph\" (UID: \"3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9\") " pod="openshift-infra/auto-csr-approver-29547904-tqsph" Mar 07 09:04:01 crc kubenswrapper[4761]: I0307 09:04:01.008834 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmlwr\" (UniqueName: \"kubernetes.io/projected/3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9-kube-api-access-fmlwr\") 
pod \"auto-csr-approver-29547904-tqsph\" (UID: \"3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9\") " pod="openshift-infra/auto-csr-approver-29547904-tqsph" Mar 07 09:04:01 crc kubenswrapper[4761]: I0307 09:04:01.073628 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547904-tqsph" Mar 07 09:04:01 crc kubenswrapper[4761]: I0307 09:04:01.610832 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547904-tqsph"] Mar 07 09:04:01 crc kubenswrapper[4761]: I0307 09:04:01.611988 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 09:04:02 crc kubenswrapper[4761]: I0307 09:04:02.550936 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547904-tqsph" event={"ID":"3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9","Type":"ContainerStarted","Data":"4bafcdbff5ed6362b7206177ece7790184d8b43db799074580dd568f5d1e98cd"} Mar 07 09:04:03 crc kubenswrapper[4761]: I0307 09:04:03.563053 4761 generic.go:334] "Generic (PLEG): container finished" podID="3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9" containerID="751e663a953c9621b6f7e8bbf8ccfd3bd89e09b8c2183de735e7eaa4b9bffba7" exitCode=0 Mar 07 09:04:03 crc kubenswrapper[4761]: I0307 09:04:03.563151 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547904-tqsph" event={"ID":"3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9","Type":"ContainerDied","Data":"751e663a953c9621b6f7e8bbf8ccfd3bd89e09b8c2183de735e7eaa4b9bffba7"} Mar 07 09:04:04 crc kubenswrapper[4761]: I0307 09:04:04.993827 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547904-tqsph" Mar 07 09:04:05 crc kubenswrapper[4761]: I0307 09:04:05.074862 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmlwr\" (UniqueName: \"kubernetes.io/projected/3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9-kube-api-access-fmlwr\") pod \"3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9\" (UID: \"3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9\") " Mar 07 09:04:05 crc kubenswrapper[4761]: I0307 09:04:05.084927 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9-kube-api-access-fmlwr" (OuterVolumeSpecName: "kube-api-access-fmlwr") pod "3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9" (UID: "3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9"). InnerVolumeSpecName "kube-api-access-fmlwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:04:05 crc kubenswrapper[4761]: I0307 09:04:05.177771 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmlwr\" (UniqueName: \"kubernetes.io/projected/3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9-kube-api-access-fmlwr\") on node \"crc\" DevicePath \"\"" Mar 07 09:04:05 crc kubenswrapper[4761]: I0307 09:04:05.613831 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547904-tqsph" event={"ID":"3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9","Type":"ContainerDied","Data":"4bafcdbff5ed6362b7206177ece7790184d8b43db799074580dd568f5d1e98cd"} Mar 07 09:04:05 crc kubenswrapper[4761]: I0307 09:04:05.613877 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bafcdbff5ed6362b7206177ece7790184d8b43db799074580dd568f5d1e98cd" Mar 07 09:04:05 crc kubenswrapper[4761]: I0307 09:04:05.613908 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547904-tqsph" Mar 07 09:04:06 crc kubenswrapper[4761]: I0307 09:04:06.090873 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547898-vn4zd"] Mar 07 09:04:06 crc kubenswrapper[4761]: I0307 09:04:06.110285 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547898-vn4zd"] Mar 07 09:04:07 crc kubenswrapper[4761]: I0307 09:04:07.720702 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99bad14e-cb05-46f6-90d5-2386ee98f2f8" path="/var/lib/kubelet/pods/99bad14e-cb05-46f6-90d5-2386ee98f2f8/volumes" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.100204 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8zz6m"] Mar 07 09:04:21 crc kubenswrapper[4761]: E0307 09:04:21.101310 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9" containerName="oc" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.101330 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9" containerName="oc" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.101625 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9" containerName="oc" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.104278 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.122522 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zz6m"] Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.214544 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-catalog-content\") pod \"redhat-marketplace-8zz6m\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.214590 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-utilities\") pod \"redhat-marketplace-8zz6m\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.214982 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv6b2\" (UniqueName: \"kubernetes.io/projected/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-kube-api-access-nv6b2\") pod \"redhat-marketplace-8zz6m\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.317295 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-catalog-content\") pod \"redhat-marketplace-8zz6m\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.317354 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-utilities\") pod \"redhat-marketplace-8zz6m\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.317645 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv6b2\" (UniqueName: \"kubernetes.io/projected/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-kube-api-access-nv6b2\") pod \"redhat-marketplace-8zz6m\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.317903 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-catalog-content\") pod \"redhat-marketplace-8zz6m\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.317957 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-utilities\") pod \"redhat-marketplace-8zz6m\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.337692 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv6b2\" (UniqueName: \"kubernetes.io/projected/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-kube-api-access-nv6b2\") pod \"redhat-marketplace-8zz6m\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.432378 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.923200 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zz6m"] Mar 07 09:04:22 crc kubenswrapper[4761]: I0307 09:04:22.550277 4761 scope.go:117] "RemoveContainer" containerID="b4d647aca9c63bfa93e553e6736bab3284efe222b6b79d764f3826cf7b8a38e6" Mar 07 09:04:22 crc kubenswrapper[4761]: I0307 09:04:22.821221 4761 generic.go:334] "Generic (PLEG): container finished" podID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerID="1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53" exitCode=0 Mar 07 09:04:22 crc kubenswrapper[4761]: I0307 09:04:22.821459 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zz6m" event={"ID":"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f","Type":"ContainerDied","Data":"1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53"} Mar 07 09:04:22 crc kubenswrapper[4761]: I0307 09:04:22.821605 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zz6m" event={"ID":"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f","Type":"ContainerStarted","Data":"a11a06bdc888f74864acf1332f4312e1814277a83b44fdc5ced1849205f8143e"} Mar 07 09:04:23 crc kubenswrapper[4761]: I0307 09:04:23.866941 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zz6m" event={"ID":"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f","Type":"ContainerStarted","Data":"a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2"} Mar 07 09:04:24 crc kubenswrapper[4761]: I0307 09:04:24.882577 4761 generic.go:334] "Generic (PLEG): container finished" podID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerID="a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2" exitCode=0 Mar 07 09:04:24 crc kubenswrapper[4761]: I0307 09:04:24.882802 4761 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zz6m" event={"ID":"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f","Type":"ContainerDied","Data":"a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2"} Mar 07 09:04:26 crc kubenswrapper[4761]: I0307 09:04:26.915062 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zz6m" event={"ID":"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f","Type":"ContainerStarted","Data":"8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0"} Mar 07 09:04:26 crc kubenswrapper[4761]: I0307 09:04:26.941159 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8zz6m" podStartSLOduration=2.798481779 podStartE2EDuration="5.941117232s" podCreationTimestamp="2026-03-07 09:04:21 +0000 UTC" firstStartedPulling="2026-03-07 09:04:22.823256854 +0000 UTC m=+4519.732423369" lastFinishedPulling="2026-03-07 09:04:25.965892347 +0000 UTC m=+4522.875058822" observedRunningTime="2026-03-07 09:04:26.931466468 +0000 UTC m=+4523.840632953" watchObservedRunningTime="2026-03-07 09:04:26.941117232 +0000 UTC m=+4523.850283707" Mar 07 09:04:31 crc kubenswrapper[4761]: I0307 09:04:31.432605 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:31 crc kubenswrapper[4761]: I0307 09:04:31.433131 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:31 crc kubenswrapper[4761]: I0307 09:04:31.497298 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:32 crc kubenswrapper[4761]: I0307 09:04:32.047894 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:32 crc kubenswrapper[4761]: I0307 
09:04:32.104288 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zz6m"] Mar 07 09:04:33 crc kubenswrapper[4761]: I0307 09:04:33.995144 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8zz6m" podUID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerName="registry-server" containerID="cri-o://8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0" gracePeriod=2 Mar 07 09:04:34 crc kubenswrapper[4761]: I0307 09:04:34.541589 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:34 crc kubenswrapper[4761]: I0307 09:04:34.666608 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv6b2\" (UniqueName: \"kubernetes.io/projected/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-kube-api-access-nv6b2\") pod \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " Mar 07 09:04:34 crc kubenswrapper[4761]: I0307 09:04:34.666908 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-utilities\") pod \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " Mar 07 09:04:34 crc kubenswrapper[4761]: I0307 09:04:34.667021 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-catalog-content\") pod \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " Mar 07 09:04:34 crc kubenswrapper[4761]: I0307 09:04:34.668029 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-utilities" (OuterVolumeSpecName: 
"utilities") pod "7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" (UID: "7d2f6bb7-9eec-42eb-99bf-99ce452fa52f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:04:34 crc kubenswrapper[4761]: I0307 09:04:34.754346 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" (UID: "7d2f6bb7-9eec-42eb-99bf-99ce452fa52f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:04:34 crc kubenswrapper[4761]: I0307 09:04:34.770091 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:04:34 crc kubenswrapper[4761]: I0307 09:04:34.770354 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.006616 4761 generic.go:334] "Generic (PLEG): container finished" podID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerID="8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0" exitCode=0 Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.006699 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.006699 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zz6m" event={"ID":"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f","Type":"ContainerDied","Data":"8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0"} Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.006793 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zz6m" event={"ID":"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f","Type":"ContainerDied","Data":"a11a06bdc888f74864acf1332f4312e1814277a83b44fdc5ced1849205f8143e"} Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.006819 4761 scope.go:117] "RemoveContainer" containerID="8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.035310 4761 scope.go:117] "RemoveContainer" containerID="a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.201940 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-kube-api-access-nv6b2" (OuterVolumeSpecName: "kube-api-access-nv6b2") pod "7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" (UID: "7d2f6bb7-9eec-42eb-99bf-99ce452fa52f"). InnerVolumeSpecName "kube-api-access-nv6b2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.214878 4761 scope.go:117] "RemoveContainer" containerID="1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.281541 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv6b2\" (UniqueName: \"kubernetes.io/projected/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-kube-api-access-nv6b2\") on node \"crc\" DevicePath \"\"" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.440487 4761 scope.go:117] "RemoveContainer" containerID="8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0" Mar 07 09:04:35 crc kubenswrapper[4761]: E0307 09:04:35.440944 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0\": container with ID starting with 8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0 not found: ID does not exist" containerID="8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.440970 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0"} err="failed to get container status \"8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0\": rpc error: code = NotFound desc = could not find container \"8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0\": container with ID starting with 8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0 not found: ID does not exist" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.440997 4761 scope.go:117] "RemoveContainer" containerID="a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2" Mar 07 09:04:35 crc kubenswrapper[4761]: E0307 09:04:35.441664 
4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2\": container with ID starting with a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2 not found: ID does not exist" containerID="a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.441690 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2"} err="failed to get container status \"a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2\": rpc error: code = NotFound desc = could not find container \"a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2\": container with ID starting with a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2 not found: ID does not exist" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.441731 4761 scope.go:117] "RemoveContainer" containerID="1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53" Mar 07 09:04:35 crc kubenswrapper[4761]: E0307 09:04:35.443131 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53\": container with ID starting with 1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53 not found: ID does not exist" containerID="1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.443203 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53"} err="failed to get container status \"1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53\": rpc error: code = 
NotFound desc = could not find container \"1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53\": container with ID starting with 1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53 not found: ID does not exist" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.504786 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zz6m"] Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.517357 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zz6m"] Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.725099 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" path="/var/lib/kubelet/pods/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f/volumes" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.280122 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fqn5q"] Mar 07 09:05:24 crc kubenswrapper[4761]: E0307 09:05:24.281420 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerName="registry-server" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.281438 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerName="registry-server" Mar 07 09:05:24 crc kubenswrapper[4761]: E0307 09:05:24.281457 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerName="extract-content" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.281465 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerName="extract-content" Mar 07 09:05:24 crc kubenswrapper[4761]: E0307 09:05:24.281518 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" 
containerName="extract-utilities" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.281529 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerName="extract-utilities" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.281823 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerName="registry-server" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.284318 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.312146 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fqn5q"] Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.455955 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69vfl\" (UniqueName: \"kubernetes.io/projected/d49a1026-3de3-46bc-9f9a-21bc9f85e744-kube-api-access-69vfl\") pod \"certified-operators-fqn5q\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.456015 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-utilities\") pod \"certified-operators-fqn5q\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.456129 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-catalog-content\") pod \"certified-operators-fqn5q\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " 
pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.466677 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bqzm8"] Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.470622 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.485929 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bqzm8"] Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.558583 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-catalog-content\") pod \"community-operators-bqzm8\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.558639 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69vfl\" (UniqueName: \"kubernetes.io/projected/d49a1026-3de3-46bc-9f9a-21bc9f85e744-kube-api-access-69vfl\") pod \"certified-operators-fqn5q\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.558663 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-utilities\") pod \"community-operators-bqzm8\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.558683 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-utilities\") pod \"certified-operators-fqn5q\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.558742 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4p5s\" (UniqueName: \"kubernetes.io/projected/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-kube-api-access-g4p5s\") pod \"community-operators-bqzm8\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.558836 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-catalog-content\") pod \"certified-operators-fqn5q\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.559353 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-catalog-content\") pod \"certified-operators-fqn5q\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.559756 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-utilities\") pod \"certified-operators-fqn5q\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.586861 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69vfl\" (UniqueName: 
\"kubernetes.io/projected/d49a1026-3de3-46bc-9f9a-21bc9f85e744-kube-api-access-69vfl\") pod \"certified-operators-fqn5q\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.606152 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.663013 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-catalog-content\") pod \"community-operators-bqzm8\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.663074 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-utilities\") pod \"community-operators-bqzm8\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.663145 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4p5s\" (UniqueName: \"kubernetes.io/projected/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-kube-api-access-g4p5s\") pod \"community-operators-bqzm8\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.663798 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-utilities\") pod \"community-operators-bqzm8\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " pod="openshift-marketplace/community-operators-bqzm8" Mar 07 
09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.663864 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-catalog-content\") pod \"community-operators-bqzm8\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.680916 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4p5s\" (UniqueName: \"kubernetes.io/projected/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-kube-api-access-g4p5s\") pod \"community-operators-bqzm8\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.804476 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:25 crc kubenswrapper[4761]: I0307 09:05:25.176980 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fqn5q"] Mar 07 09:05:25 crc kubenswrapper[4761]: I0307 09:05:25.567406 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bqzm8"] Mar 07 09:05:25 crc kubenswrapper[4761]: W0307 09:05:25.576867 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod078e6a71_3145_4fc1_a2d8_24cf3dc66ed6.slice/crio-fbc8b331e89a42aa1e6b02001fa32036bdf12f4a2a68d9078ea24843483041ea WatchSource:0}: Error finding container fbc8b331e89a42aa1e6b02001fa32036bdf12f4a2a68d9078ea24843483041ea: Status 404 returned error can't find the container with id fbc8b331e89a42aa1e6b02001fa32036bdf12f4a2a68d9078ea24843483041ea Mar 07 09:05:25 crc kubenswrapper[4761]: I0307 09:05:25.610600 4761 generic.go:334] "Generic (PLEG): container finished" 
podID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerID="bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794" exitCode=0 Mar 07 09:05:25 crc kubenswrapper[4761]: I0307 09:05:25.610693 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqn5q" event={"ID":"d49a1026-3de3-46bc-9f9a-21bc9f85e744","Type":"ContainerDied","Data":"bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794"} Mar 07 09:05:25 crc kubenswrapper[4761]: I0307 09:05:25.610742 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqn5q" event={"ID":"d49a1026-3de3-46bc-9f9a-21bc9f85e744","Type":"ContainerStarted","Data":"d53230d1881acb9f6851b4291ebc2b45db532900ea94aa0ffb25d6dbb7c56f89"} Mar 07 09:05:25 crc kubenswrapper[4761]: I0307 09:05:25.612310 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqzm8" event={"ID":"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6","Type":"ContainerStarted","Data":"fbc8b331e89a42aa1e6b02001fa32036bdf12f4a2a68d9078ea24843483041ea"} Mar 07 09:05:26 crc kubenswrapper[4761]: I0307 09:05:26.627064 4761 generic.go:334] "Generic (PLEG): container finished" podID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerID="4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120" exitCode=0 Mar 07 09:05:26 crc kubenswrapper[4761]: I0307 09:05:26.627385 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqzm8" event={"ID":"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6","Type":"ContainerDied","Data":"4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120"} Mar 07 09:05:27 crc kubenswrapper[4761]: I0307 09:05:27.641285 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqn5q" 
event={"ID":"d49a1026-3de3-46bc-9f9a-21bc9f85e744","Type":"ContainerStarted","Data":"233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf"} Mar 07 09:05:28 crc kubenswrapper[4761]: I0307 09:05:28.670048 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqzm8" event={"ID":"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6","Type":"ContainerStarted","Data":"f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de"} Mar 07 09:05:29 crc kubenswrapper[4761]: I0307 09:05:29.684823 4761 generic.go:334] "Generic (PLEG): container finished" podID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerID="233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf" exitCode=0 Mar 07 09:05:29 crc kubenswrapper[4761]: I0307 09:05:29.684885 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqn5q" event={"ID":"d49a1026-3de3-46bc-9f9a-21bc9f85e744","Type":"ContainerDied","Data":"233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf"} Mar 07 09:05:30 crc kubenswrapper[4761]: I0307 09:05:30.729747 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqn5q" event={"ID":"d49a1026-3de3-46bc-9f9a-21bc9f85e744","Type":"ContainerStarted","Data":"60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108"} Mar 07 09:05:30 crc kubenswrapper[4761]: I0307 09:05:30.732961 4761 generic.go:334] "Generic (PLEG): container finished" podID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerID="f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de" exitCode=0 Mar 07 09:05:30 crc kubenswrapper[4761]: I0307 09:05:30.733096 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqzm8" event={"ID":"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6","Type":"ContainerDied","Data":"f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de"} Mar 07 09:05:30 crc kubenswrapper[4761]: 
I0307 09:05:30.765877 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fqn5q" podStartSLOduration=2.129191207 podStartE2EDuration="6.765853012s" podCreationTimestamp="2026-03-07 09:05:24 +0000 UTC" firstStartedPulling="2026-03-07 09:05:25.61291065 +0000 UTC m=+4582.522077125" lastFinishedPulling="2026-03-07 09:05:30.249572455 +0000 UTC m=+4587.158738930" observedRunningTime="2026-03-07 09:05:30.764407397 +0000 UTC m=+4587.673573892" watchObservedRunningTime="2026-03-07 09:05:30.765853012 +0000 UTC m=+4587.675019487" Mar 07 09:05:32 crc kubenswrapper[4761]: I0307 09:05:32.763861 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqzm8" event={"ID":"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6","Type":"ContainerStarted","Data":"3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1"} Mar 07 09:05:32 crc kubenswrapper[4761]: I0307 09:05:32.785779 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bqzm8" podStartSLOduration=4.242381091 podStartE2EDuration="8.785733375s" podCreationTimestamp="2026-03-07 09:05:24 +0000 UTC" firstStartedPulling="2026-03-07 09:05:26.629407714 +0000 UTC m=+4583.538574189" lastFinishedPulling="2026-03-07 09:05:31.172759998 +0000 UTC m=+4588.081926473" observedRunningTime="2026-03-07 09:05:32.783426799 +0000 UTC m=+4589.692593304" watchObservedRunningTime="2026-03-07 09:05:32.785733375 +0000 UTC m=+4589.694899890" Mar 07 09:05:34 crc kubenswrapper[4761]: I0307 09:05:34.606328 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:34 crc kubenswrapper[4761]: I0307 09:05:34.606659 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:34 crc kubenswrapper[4761]: I0307 09:05:34.805459 4761 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:34 crc kubenswrapper[4761]: I0307 09:05:34.805783 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:35 crc kubenswrapper[4761]: I0307 09:05:35.667756 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fqn5q" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerName="registry-server" probeResult="failure" output=< Mar 07 09:05:35 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:05:35 crc kubenswrapper[4761]: > Mar 07 09:05:35 crc kubenswrapper[4761]: I0307 09:05:35.860079 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-bqzm8" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerName="registry-server" probeResult="failure" output=< Mar 07 09:05:35 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:05:35 crc kubenswrapper[4761]: > Mar 07 09:05:43 crc kubenswrapper[4761]: I0307 09:05:43.768656 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:05:43 crc kubenswrapper[4761]: I0307 09:05:43.769582 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:05:44 crc kubenswrapper[4761]: I0307 09:05:44.683281 4761 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:44 crc kubenswrapper[4761]: I0307 09:05:44.768368 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:44 crc kubenswrapper[4761]: I0307 09:05:44.913319 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:44 crc kubenswrapper[4761]: I0307 09:05:44.943814 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fqn5q"] Mar 07 09:05:44 crc kubenswrapper[4761]: I0307 09:05:44.965575 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:45 crc kubenswrapper[4761]: I0307 09:05:45.913620 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fqn5q" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerName="registry-server" containerID="cri-o://60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108" gracePeriod=2 Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.787939 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.844078 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-utilities\") pod \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.844328 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-catalog-content\") pod \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.844441 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69vfl\" (UniqueName: \"kubernetes.io/projected/d49a1026-3de3-46bc-9f9a-21bc9f85e744-kube-api-access-69vfl\") pod \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.845161 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-utilities" (OuterVolumeSpecName: "utilities") pod "d49a1026-3de3-46bc-9f9a-21bc9f85e744" (UID: "d49a1026-3de3-46bc-9f9a-21bc9f85e744"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.847022 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.855581 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49a1026-3de3-46bc-9f9a-21bc9f85e744-kube-api-access-69vfl" (OuterVolumeSpecName: "kube-api-access-69vfl") pod "d49a1026-3de3-46bc-9f9a-21bc9f85e744" (UID: "d49a1026-3de3-46bc-9f9a-21bc9f85e744"). InnerVolumeSpecName "kube-api-access-69vfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.905561 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d49a1026-3de3-46bc-9f9a-21bc9f85e744" (UID: "d49a1026-3de3-46bc-9f9a-21bc9f85e744"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.926971 4761 generic.go:334] "Generic (PLEG): container finished" podID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerID="60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108" exitCode=0 Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.927028 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.927031 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqn5q" event={"ID":"d49a1026-3de3-46bc-9f9a-21bc9f85e744","Type":"ContainerDied","Data":"60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108"} Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.927097 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqn5q" event={"ID":"d49a1026-3de3-46bc-9f9a-21bc9f85e744","Type":"ContainerDied","Data":"d53230d1881acb9f6851b4291ebc2b45db532900ea94aa0ffb25d6dbb7c56f89"} Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.927119 4761 scope.go:117] "RemoveContainer" containerID="60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.948886 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.948916 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69vfl\" (UniqueName: \"kubernetes.io/projected/d49a1026-3de3-46bc-9f9a-21bc9f85e744-kube-api-access-69vfl\") on node \"crc\" DevicePath \"\"" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.968921 4761 scope.go:117] "RemoveContainer" containerID="233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.971133 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fqn5q"] Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.985372 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fqn5q"] Mar 07 09:05:47 crc 
kubenswrapper[4761]: I0307 09:05:47.002638 4761 scope.go:117] "RemoveContainer" containerID="bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.051555 4761 scope.go:117] "RemoveContainer" containerID="60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108" Mar 07 09:05:47 crc kubenswrapper[4761]: E0307 09:05:47.058126 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108\": container with ID starting with 60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108 not found: ID does not exist" containerID="60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.058180 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108"} err="failed to get container status \"60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108\": rpc error: code = NotFound desc = could not find container \"60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108\": container with ID starting with 60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108 not found: ID does not exist" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.058205 4761 scope.go:117] "RemoveContainer" containerID="233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf" Mar 07 09:05:47 crc kubenswrapper[4761]: E0307 09:05:47.061849 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf\": container with ID starting with 233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf not found: ID does not exist" 
containerID="233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.061878 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf"} err="failed to get container status \"233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf\": rpc error: code = NotFound desc = could not find container \"233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf\": container with ID starting with 233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf not found: ID does not exist" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.061893 4761 scope.go:117] "RemoveContainer" containerID="bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794" Mar 07 09:05:47 crc kubenswrapper[4761]: E0307 09:05:47.063103 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794\": container with ID starting with bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794 not found: ID does not exist" containerID="bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.063154 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794"} err="failed to get container status \"bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794\": rpc error: code = NotFound desc = could not find container \"bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794\": container with ID starting with bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794 not found: ID does not exist" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.330955 4761 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bqzm8"] Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.331551 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bqzm8" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerName="registry-server" containerID="cri-o://3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1" gracePeriod=2 Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.730929 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" path="/var/lib/kubelet/pods/d49a1026-3de3-46bc-9f9a-21bc9f85e744/volumes" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.862896 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.947277 4761 generic.go:334] "Generic (PLEG): container finished" podID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerID="3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1" exitCode=0 Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.947325 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqzm8" event={"ID":"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6","Type":"ContainerDied","Data":"3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1"} Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.947356 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqzm8" event={"ID":"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6","Type":"ContainerDied","Data":"fbc8b331e89a42aa1e6b02001fa32036bdf12f4a2a68d9078ea24843483041ea"} Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.947379 4761 scope.go:117] "RemoveContainer" containerID="3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1" Mar 07 
09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.947415 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.970875 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4p5s\" (UniqueName: \"kubernetes.io/projected/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-kube-api-access-g4p5s\") pod \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.971123 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-utilities\") pod \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.971208 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-catalog-content\") pod \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.971387 4761 scope.go:117] "RemoveContainer" containerID="f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.972237 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-utilities" (OuterVolumeSpecName: "utilities") pod "078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" (UID: "078e6a71-3145-4fc1-a2d8-24cf3dc66ed6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.978101 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-kube-api-access-g4p5s" (OuterVolumeSpecName: "kube-api-access-g4p5s") pod "078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" (UID: "078e6a71-3145-4fc1-a2d8-24cf3dc66ed6"). InnerVolumeSpecName "kube-api-access-g4p5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.994800 4761 scope.go:117] "RemoveContainer" containerID="4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.034829 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" (UID: "078e6a71-3145-4fc1-a2d8-24cf3dc66ed6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.053753 4761 scope.go:117] "RemoveContainer" containerID="3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1" Mar 07 09:05:48 crc kubenswrapper[4761]: E0307 09:05:48.054448 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1\": container with ID starting with 3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1 not found: ID does not exist" containerID="3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.054845 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1"} err="failed to get container status \"3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1\": rpc error: code = NotFound desc = could not find container \"3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1\": container with ID starting with 3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1 not found: ID does not exist" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.054904 4761 scope.go:117] "RemoveContainer" containerID="f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de" Mar 07 09:05:48 crc kubenswrapper[4761]: E0307 09:05:48.055750 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de\": container with ID starting with f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de not found: ID does not exist" containerID="f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.055792 
4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de"} err="failed to get container status \"f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de\": rpc error: code = NotFound desc = could not find container \"f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de\": container with ID starting with f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de not found: ID does not exist" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.055814 4761 scope.go:117] "RemoveContainer" containerID="4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120" Mar 07 09:05:48 crc kubenswrapper[4761]: E0307 09:05:48.057069 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120\": container with ID starting with 4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120 not found: ID does not exist" containerID="4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.057119 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120"} err="failed to get container status \"4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120\": rpc error: code = NotFound desc = could not find container \"4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120\": container with ID starting with 4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120 not found: ID does not exist" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.076384 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4p5s\" (UniqueName: 
\"kubernetes.io/projected/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-kube-api-access-g4p5s\") on node \"crc\" DevicePath \"\"" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.076425 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.076437 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.301857 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bqzm8"] Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.311686 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bqzm8"] Mar 07 09:05:49 crc kubenswrapper[4761]: I0307 09:05:49.721945 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" path="/var/lib/kubelet/pods/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6/volumes" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.153904 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547906-8npwc"] Mar 07 09:06:00 crc kubenswrapper[4761]: E0307 09:06:00.155286 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerName="registry-server" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.155310 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerName="registry-server" Mar 07 09:06:00 crc kubenswrapper[4761]: E0307 09:06:00.155365 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" 
containerName="registry-server" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.155378 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerName="registry-server" Mar 07 09:06:00 crc kubenswrapper[4761]: E0307 09:06:00.155408 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerName="extract-utilities" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.155425 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerName="extract-utilities" Mar 07 09:06:00 crc kubenswrapper[4761]: E0307 09:06:00.155450 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerName="extract-content" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.155462 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerName="extract-content" Mar 07 09:06:00 crc kubenswrapper[4761]: E0307 09:06:00.155504 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerName="extract-utilities" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.155516 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerName="extract-utilities" Mar 07 09:06:00 crc kubenswrapper[4761]: E0307 09:06:00.155540 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerName="extract-content" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.155552 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerName="extract-content" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.156031 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" 
containerName="registry-server" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.156065 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerName="registry-server" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.157356 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547906-8npwc" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.159391 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.163394 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.163670 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.173260 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqtzl\" (UniqueName: \"kubernetes.io/projected/e5baa6ec-91e1-4249-a7a5-89b76d419e4b-kube-api-access-wqtzl\") pod \"auto-csr-approver-29547906-8npwc\" (UID: \"e5baa6ec-91e1-4249-a7a5-89b76d419e4b\") " pod="openshift-infra/auto-csr-approver-29547906-8npwc" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.175227 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547906-8npwc"] Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.275701 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqtzl\" (UniqueName: \"kubernetes.io/projected/e5baa6ec-91e1-4249-a7a5-89b76d419e4b-kube-api-access-wqtzl\") pod \"auto-csr-approver-29547906-8npwc\" (UID: \"e5baa6ec-91e1-4249-a7a5-89b76d419e4b\") " pod="openshift-infra/auto-csr-approver-29547906-8npwc" Mar 07 
09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.298369 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqtzl\" (UniqueName: \"kubernetes.io/projected/e5baa6ec-91e1-4249-a7a5-89b76d419e4b-kube-api-access-wqtzl\") pod \"auto-csr-approver-29547906-8npwc\" (UID: \"e5baa6ec-91e1-4249-a7a5-89b76d419e4b\") " pod="openshift-infra/auto-csr-approver-29547906-8npwc" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.485660 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547906-8npwc" Mar 07 09:06:01 crc kubenswrapper[4761]: I0307 09:06:01.010562 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547906-8npwc"] Mar 07 09:06:01 crc kubenswrapper[4761]: I0307 09:06:01.094270 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547906-8npwc" event={"ID":"e5baa6ec-91e1-4249-a7a5-89b76d419e4b","Type":"ContainerStarted","Data":"a3dceace0c96b9c8e3bc362392b23aff7456094620913cb977e77195338b1e88"} Mar 07 09:06:03 crc kubenswrapper[4761]: I0307 09:06:03.118725 4761 generic.go:334] "Generic (PLEG): container finished" podID="e5baa6ec-91e1-4249-a7a5-89b76d419e4b" containerID="b7cef6fa5da4525dafd29a5b4cac6a2dd3cd39f62d9e7b5c1c0c0e186d1fce71" exitCode=0 Mar 07 09:06:03 crc kubenswrapper[4761]: I0307 09:06:03.119213 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547906-8npwc" event={"ID":"e5baa6ec-91e1-4249-a7a5-89b76d419e4b","Type":"ContainerDied","Data":"b7cef6fa5da4525dafd29a5b4cac6a2dd3cd39f62d9e7b5c1c0c0e186d1fce71"} Mar 07 09:06:04 crc kubenswrapper[4761]: I0307 09:06:04.614838 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547906-8npwc" Mar 07 09:06:04 crc kubenswrapper[4761]: I0307 09:06:04.783617 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqtzl\" (UniqueName: \"kubernetes.io/projected/e5baa6ec-91e1-4249-a7a5-89b76d419e4b-kube-api-access-wqtzl\") pod \"e5baa6ec-91e1-4249-a7a5-89b76d419e4b\" (UID: \"e5baa6ec-91e1-4249-a7a5-89b76d419e4b\") " Mar 07 09:06:04 crc kubenswrapper[4761]: I0307 09:06:04.789463 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5baa6ec-91e1-4249-a7a5-89b76d419e4b-kube-api-access-wqtzl" (OuterVolumeSpecName: "kube-api-access-wqtzl") pod "e5baa6ec-91e1-4249-a7a5-89b76d419e4b" (UID: "e5baa6ec-91e1-4249-a7a5-89b76d419e4b"). InnerVolumeSpecName "kube-api-access-wqtzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:06:04 crc kubenswrapper[4761]: I0307 09:06:04.889187 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqtzl\" (UniqueName: \"kubernetes.io/projected/e5baa6ec-91e1-4249-a7a5-89b76d419e4b-kube-api-access-wqtzl\") on node \"crc\" DevicePath \"\"" Mar 07 09:06:05 crc kubenswrapper[4761]: I0307 09:06:05.142912 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547906-8npwc" event={"ID":"e5baa6ec-91e1-4249-a7a5-89b76d419e4b","Type":"ContainerDied","Data":"a3dceace0c96b9c8e3bc362392b23aff7456094620913cb977e77195338b1e88"} Mar 07 09:06:05 crc kubenswrapper[4761]: I0307 09:06:05.142956 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3dceace0c96b9c8e3bc362392b23aff7456094620913cb977e77195338b1e88" Mar 07 09:06:05 crc kubenswrapper[4761]: I0307 09:06:05.142981 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547906-8npwc" Mar 07 09:06:05 crc kubenswrapper[4761]: I0307 09:06:05.738474 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547900-4mplz"] Mar 07 09:06:05 crc kubenswrapper[4761]: I0307 09:06:05.769526 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547900-4mplz"] Mar 07 09:06:07 crc kubenswrapper[4761]: I0307 09:06:07.725500 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d76aff1d-3203-40ca-831e-c2628cc785e5" path="/var/lib/kubelet/pods/d76aff1d-3203-40ca-831e-c2628cc785e5/volumes" Mar 07 09:06:13 crc kubenswrapper[4761]: I0307 09:06:13.768142 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:06:13 crc kubenswrapper[4761]: I0307 09:06:13.768777 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:06:22 crc kubenswrapper[4761]: I0307 09:06:22.710851 4761 scope.go:117] "RemoveContainer" containerID="deae858385797443c973dbee5b25daad37118b53b46aef06b53d89d1a40ab719" Mar 07 09:06:43 crc kubenswrapper[4761]: I0307 09:06:43.768884 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:06:43 crc kubenswrapper[4761]: 
I0307 09:06:43.769629 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:06:43 crc kubenswrapper[4761]: I0307 09:06:43.769683 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 09:06:43 crc kubenswrapper[4761]: I0307 09:06:43.770631 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 09:06:43 crc kubenswrapper[4761]: I0307 09:06:43.770693 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" gracePeriod=600 Mar 07 09:06:43 crc kubenswrapper[4761]: E0307 09:06:43.896616 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:06:44 crc kubenswrapper[4761]: I0307 09:06:44.620782 4761 generic.go:334] "Generic (PLEG): container finished" 
podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" exitCode=0 Mar 07 09:06:44 crc kubenswrapper[4761]: I0307 09:06:44.620847 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823"} Mar 07 09:06:44 crc kubenswrapper[4761]: I0307 09:06:44.621115 4761 scope.go:117] "RemoveContainer" containerID="7f80f30dd74ec8eec9d0d65df5727221eb321d5c597633536f2b7fa2b1d20fb6" Mar 07 09:06:44 crc kubenswrapper[4761]: I0307 09:06:44.621794 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:06:44 crc kubenswrapper[4761]: E0307 09:06:44.622094 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:06:58 crc kubenswrapper[4761]: I0307 09:06:58.705428 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:06:58 crc kubenswrapper[4761]: E0307 09:06:58.706221 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 
09:07:10 crc kubenswrapper[4761]: I0307 09:07:10.706314 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:07:10 crc kubenswrapper[4761]: E0307 09:07:10.707375 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:07:23 crc kubenswrapper[4761]: I0307 09:07:23.723955 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:07:23 crc kubenswrapper[4761]: E0307 09:07:23.724771 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:07:34 crc kubenswrapper[4761]: I0307 09:07:34.705566 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:07:34 crc kubenswrapper[4761]: E0307 09:07:34.706354 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" 
podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:07:45 crc kubenswrapper[4761]: I0307 09:07:45.706725 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:07:45 crc kubenswrapper[4761]: E0307 09:07:45.707707 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:07:57 crc kubenswrapper[4761]: I0307 09:07:57.711061 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:07:57 crc kubenswrapper[4761]: E0307 09:07:57.711973 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.146335 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547908-djgzj"] Mar 07 09:08:00 crc kubenswrapper[4761]: E0307 09:08:00.147857 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5baa6ec-91e1-4249-a7a5-89b76d419e4b" containerName="oc" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.147881 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5baa6ec-91e1-4249-a7a5-89b76d419e4b" containerName="oc" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.148310 4761 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e5baa6ec-91e1-4249-a7a5-89b76d419e4b" containerName="oc" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.149706 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547908-djgzj" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.152443 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.152882 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.153998 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.157527 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547908-djgzj"] Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.351864 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxzs6\" (UniqueName: \"kubernetes.io/projected/8d3b9b36-b295-4d46-8ac4-c53634b7fd31-kube-api-access-sxzs6\") pod \"auto-csr-approver-29547908-djgzj\" (UID: \"8d3b9b36-b295-4d46-8ac4-c53634b7fd31\") " pod="openshift-infra/auto-csr-approver-29547908-djgzj" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.454932 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxzs6\" (UniqueName: \"kubernetes.io/projected/8d3b9b36-b295-4d46-8ac4-c53634b7fd31-kube-api-access-sxzs6\") pod \"auto-csr-approver-29547908-djgzj\" (UID: \"8d3b9b36-b295-4d46-8ac4-c53634b7fd31\") " pod="openshift-infra/auto-csr-approver-29547908-djgzj" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.475348 4761 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-sxzs6\" (UniqueName: \"kubernetes.io/projected/8d3b9b36-b295-4d46-8ac4-c53634b7fd31-kube-api-access-sxzs6\") pod \"auto-csr-approver-29547908-djgzj\" (UID: \"8d3b9b36-b295-4d46-8ac4-c53634b7fd31\") " pod="openshift-infra/auto-csr-approver-29547908-djgzj" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.483573 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547908-djgzj" Mar 07 09:08:00 crc kubenswrapper[4761]: W0307 09:08:00.960397 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d3b9b36_b295_4d46_8ac4_c53634b7fd31.slice/crio-4677d10797f85c2aabe887c90c25424578375761465b541e946c069515d85af7 WatchSource:0}: Error finding container 4677d10797f85c2aabe887c90c25424578375761465b541e946c069515d85af7: Status 404 returned error can't find the container with id 4677d10797f85c2aabe887c90c25424578375761465b541e946c069515d85af7 Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.965108 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547908-djgzj"] Mar 07 09:08:01 crc kubenswrapper[4761]: I0307 09:08:01.553610 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547908-djgzj" event={"ID":"8d3b9b36-b295-4d46-8ac4-c53634b7fd31","Type":"ContainerStarted","Data":"4677d10797f85c2aabe887c90c25424578375761465b541e946c069515d85af7"} Mar 07 09:08:02 crc kubenswrapper[4761]: I0307 09:08:02.566210 4761 generic.go:334] "Generic (PLEG): container finished" podID="8d3b9b36-b295-4d46-8ac4-c53634b7fd31" containerID="08fb1919b7c18d41d8bdce3c775ce0c19a509988ae99655b98ed10fa4e5ccf1f" exitCode=0 Mar 07 09:08:02 crc kubenswrapper[4761]: I0307 09:08:02.566311 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547908-djgzj" 
event={"ID":"8d3b9b36-b295-4d46-8ac4-c53634b7fd31","Type":"ContainerDied","Data":"08fb1919b7c18d41d8bdce3c775ce0c19a509988ae99655b98ed10fa4e5ccf1f"} Mar 07 09:08:04 crc kubenswrapper[4761]: I0307 09:08:04.120023 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547908-djgzj" Mar 07 09:08:04 crc kubenswrapper[4761]: I0307 09:08:04.161288 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxzs6\" (UniqueName: \"kubernetes.io/projected/8d3b9b36-b295-4d46-8ac4-c53634b7fd31-kube-api-access-sxzs6\") pod \"8d3b9b36-b295-4d46-8ac4-c53634b7fd31\" (UID: \"8d3b9b36-b295-4d46-8ac4-c53634b7fd31\") " Mar 07 09:08:04 crc kubenswrapper[4761]: I0307 09:08:04.193210 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d3b9b36-b295-4d46-8ac4-c53634b7fd31-kube-api-access-sxzs6" (OuterVolumeSpecName: "kube-api-access-sxzs6") pod "8d3b9b36-b295-4d46-8ac4-c53634b7fd31" (UID: "8d3b9b36-b295-4d46-8ac4-c53634b7fd31"). InnerVolumeSpecName "kube-api-access-sxzs6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:08:04 crc kubenswrapper[4761]: I0307 09:08:04.264185 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxzs6\" (UniqueName: \"kubernetes.io/projected/8d3b9b36-b295-4d46-8ac4-c53634b7fd31-kube-api-access-sxzs6\") on node \"crc\" DevicePath \"\"" Mar 07 09:08:04 crc kubenswrapper[4761]: I0307 09:08:04.598208 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547908-djgzj" event={"ID":"8d3b9b36-b295-4d46-8ac4-c53634b7fd31","Type":"ContainerDied","Data":"4677d10797f85c2aabe887c90c25424578375761465b541e946c069515d85af7"} Mar 07 09:08:04 crc kubenswrapper[4761]: I0307 09:08:04.598255 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4677d10797f85c2aabe887c90c25424578375761465b541e946c069515d85af7" Mar 07 09:08:04 crc kubenswrapper[4761]: I0307 09:08:04.598319 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547908-djgzj" Mar 07 09:08:05 crc kubenswrapper[4761]: I0307 09:08:05.222686 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547902-2ntd7"] Mar 07 09:08:05 crc kubenswrapper[4761]: I0307 09:08:05.235105 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547902-2ntd7"] Mar 07 09:08:05 crc kubenswrapper[4761]: I0307 09:08:05.727456 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bc7c313-fe46-4bb6-ac32-7b2e93f98c63" path="/var/lib/kubelet/pods/8bc7c313-fe46-4bb6-ac32-7b2e93f98c63/volumes" Mar 07 09:08:12 crc kubenswrapper[4761]: I0307 09:08:12.708338 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:08:12 crc kubenswrapper[4761]: E0307 09:08:12.709224 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:08:23 crc kubenswrapper[4761]: I0307 09:08:23.382393 4761 scope.go:117] "RemoveContainer" containerID="6eafd66faadd47449571d768d116121ea726b6ad2cc3e26ec17adb107e02d96b" Mar 07 09:08:25 crc kubenswrapper[4761]: I0307 09:08:25.706521 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:08:25 crc kubenswrapper[4761]: E0307 09:08:25.707505 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:08:40 crc kubenswrapper[4761]: I0307 09:08:40.706212 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:08:40 crc kubenswrapper[4761]: E0307 09:08:40.706880 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:08:52 crc kubenswrapper[4761]: I0307 09:08:52.707818 4761 scope.go:117] "RemoveContainer" 
containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:08:52 crc kubenswrapper[4761]: E0307 09:08:52.708676 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:09:04 crc kubenswrapper[4761]: I0307 09:09:04.706750 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:09:04 crc kubenswrapper[4761]: E0307 09:09:04.707544 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:09:17 crc kubenswrapper[4761]: I0307 09:09:17.714451 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:09:17 crc kubenswrapper[4761]: E0307 09:09:17.715357 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:09:28 crc kubenswrapper[4761]: I0307 09:09:28.708294 4761 scope.go:117] 
"RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:09:28 crc kubenswrapper[4761]: E0307 09:09:28.709206 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.631142 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p4pmn"] Mar 07 09:09:34 crc kubenswrapper[4761]: E0307 09:09:34.632211 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d3b9b36-b295-4d46-8ac4-c53634b7fd31" containerName="oc" Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.632228 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d3b9b36-b295-4d46-8ac4-c53634b7fd31" containerName="oc" Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.632536 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d3b9b36-b295-4d46-8ac4-c53634b7fd31" containerName="oc" Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.634696 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p4pmn"
Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.664936 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p4pmn"]
Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.668316 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-utilities\") pod \"redhat-operators-p4pmn\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " pod="openshift-marketplace/redhat-operators-p4pmn"
Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.668361 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-catalog-content\") pod \"redhat-operators-p4pmn\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " pod="openshift-marketplace/redhat-operators-p4pmn"
Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.668390 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r26kv\" (UniqueName: \"kubernetes.io/projected/8984806c-345c-44e0-afcb-6840f2a9cd5b-kube-api-access-r26kv\") pod \"redhat-operators-p4pmn\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " pod="openshift-marketplace/redhat-operators-p4pmn"
Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.771642 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-utilities\") pod \"redhat-operators-p4pmn\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " pod="openshift-marketplace/redhat-operators-p4pmn"
Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.771763 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-catalog-content\") pod \"redhat-operators-p4pmn\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " pod="openshift-marketplace/redhat-operators-p4pmn"
Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.771811 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r26kv\" (UniqueName: \"kubernetes.io/projected/8984806c-345c-44e0-afcb-6840f2a9cd5b-kube-api-access-r26kv\") pod \"redhat-operators-p4pmn\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " pod="openshift-marketplace/redhat-operators-p4pmn"
Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.773095 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-utilities\") pod \"redhat-operators-p4pmn\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " pod="openshift-marketplace/redhat-operators-p4pmn"
Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.774174 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-catalog-content\") pod \"redhat-operators-p4pmn\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " pod="openshift-marketplace/redhat-operators-p4pmn"
Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.806556 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r26kv\" (UniqueName: \"kubernetes.io/projected/8984806c-345c-44e0-afcb-6840f2a9cd5b-kube-api-access-r26kv\") pod \"redhat-operators-p4pmn\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " pod="openshift-marketplace/redhat-operators-p4pmn"
Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.965731 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4pmn"
Mar 07 09:09:35 crc kubenswrapper[4761]: I0307 09:09:35.494015 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p4pmn"]
Mar 07 09:09:35 crc kubenswrapper[4761]: I0307 09:09:35.528327 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4pmn" event={"ID":"8984806c-345c-44e0-afcb-6840f2a9cd5b","Type":"ContainerStarted","Data":"a889e64c97f59698f5e456831e4dddfb3c62bf982e0f6734f0a8adf1561c6b49"}
Mar 07 09:09:36 crc kubenswrapper[4761]: I0307 09:09:36.537745 4761 generic.go:334] "Generic (PLEG): container finished" podID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerID="47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61" exitCode=0
Mar 07 09:09:36 crc kubenswrapper[4761]: I0307 09:09:36.537816 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4pmn" event={"ID":"8984806c-345c-44e0-afcb-6840f2a9cd5b","Type":"ContainerDied","Data":"47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61"}
Mar 07 09:09:36 crc kubenswrapper[4761]: I0307 09:09:36.540120 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 07 09:09:38 crc kubenswrapper[4761]: I0307 09:09:38.571868 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4pmn" event={"ID":"8984806c-345c-44e0-afcb-6840f2a9cd5b","Type":"ContainerStarted","Data":"0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176"}
Mar 07 09:09:40 crc kubenswrapper[4761]: I0307 09:09:40.706057 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823"
Mar 07 09:09:40 crc kubenswrapper[4761]: E0307 09:09:40.706607 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:09:44 crc kubenswrapper[4761]: I0307 09:09:44.648034 4761 generic.go:334] "Generic (PLEG): container finished" podID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerID="0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176" exitCode=0
Mar 07 09:09:44 crc kubenswrapper[4761]: I0307 09:09:44.648467 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4pmn" event={"ID":"8984806c-345c-44e0-afcb-6840f2a9cd5b","Type":"ContainerDied","Data":"0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176"}
Mar 07 09:09:45 crc kubenswrapper[4761]: I0307 09:09:45.661709 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4pmn" event={"ID":"8984806c-345c-44e0-afcb-6840f2a9cd5b","Type":"ContainerStarted","Data":"f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778"}
Mar 07 09:09:45 crc kubenswrapper[4761]: I0307 09:09:45.685888 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p4pmn" podStartSLOduration=3.110033921 podStartE2EDuration="11.685870774s" podCreationTimestamp="2026-03-07 09:09:34 +0000 UTC" firstStartedPulling="2026-03-07 09:09:36.539868328 +0000 UTC m=+4833.449034803" lastFinishedPulling="2026-03-07 09:09:45.115705181 +0000 UTC m=+4842.024871656" observedRunningTime="2026-03-07 09:09:45.684247495 +0000 UTC m=+4842.593413970" watchObservedRunningTime="2026-03-07 09:09:45.685870774 +0000 UTC m=+4842.595037249"
Mar 07 09:09:52 crc kubenswrapper[4761]: I0307 09:09:52.706526 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823"
Mar 07 09:09:52 crc kubenswrapper[4761]: E0307 09:09:52.707384 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:09:54 crc kubenswrapper[4761]: I0307 09:09:54.966843 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p4pmn"
Mar 07 09:09:54 crc kubenswrapper[4761]: I0307 09:09:54.967504 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p4pmn"
Mar 07 09:09:56 crc kubenswrapper[4761]: I0307 09:09:56.027188 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p4pmn" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="registry-server" probeResult="failure" output=<
Mar 07 09:09:56 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 09:09:56 crc kubenswrapper[4761]: >
Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.152005 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547910-tp9pj"]
Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.154643 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547910-tp9pj"
Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.160230 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.160552 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.160984 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv"
Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.171607 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547910-tp9pj"]
Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.192289 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfv26\" (UniqueName: \"kubernetes.io/projected/77c8bd54-9347-4e87-bd44-76913cb2a3f6-kube-api-access-hfv26\") pod \"auto-csr-approver-29547910-tp9pj\" (UID: \"77c8bd54-9347-4e87-bd44-76913cb2a3f6\") " pod="openshift-infra/auto-csr-approver-29547910-tp9pj"
Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.295236 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfv26\" (UniqueName: \"kubernetes.io/projected/77c8bd54-9347-4e87-bd44-76913cb2a3f6-kube-api-access-hfv26\") pod \"auto-csr-approver-29547910-tp9pj\" (UID: \"77c8bd54-9347-4e87-bd44-76913cb2a3f6\") " pod="openshift-infra/auto-csr-approver-29547910-tp9pj"
Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.320110 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfv26\" (UniqueName: \"kubernetes.io/projected/77c8bd54-9347-4e87-bd44-76913cb2a3f6-kube-api-access-hfv26\") pod \"auto-csr-approver-29547910-tp9pj\" (UID: \"77c8bd54-9347-4e87-bd44-76913cb2a3f6\") " pod="openshift-infra/auto-csr-approver-29547910-tp9pj"
Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.475087 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547910-tp9pj"
Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.983354 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547910-tp9pj"]
Mar 07 09:10:01 crc kubenswrapper[4761]: I0307 09:10:01.859574 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547910-tp9pj" event={"ID":"77c8bd54-9347-4e87-bd44-76913cb2a3f6","Type":"ContainerStarted","Data":"6ad86775423df5083052684aff99a4d317d806c727d6f896c3ec16eb411f40fe"}
Mar 07 09:10:02 crc kubenswrapper[4761]: I0307 09:10:02.879046 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547910-tp9pj" event={"ID":"77c8bd54-9347-4e87-bd44-76913cb2a3f6","Type":"ContainerStarted","Data":"258a09f7752f77736f7c79cc137d75713f6d9a437375f7b59cbe63159e498518"}
Mar 07 09:10:02 crc kubenswrapper[4761]: I0307 09:10:02.929604 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547910-tp9pj" podStartSLOduration=1.926845266 podStartE2EDuration="2.929576097s" podCreationTimestamp="2026-03-07 09:10:00 +0000 UTC" firstStartedPulling="2026-03-07 09:10:00.990793782 +0000 UTC m=+4857.899960267" lastFinishedPulling="2026-03-07 09:10:01.993524583 +0000 UTC m=+4858.902691098" observedRunningTime="2026-03-07 09:10:02.896112806 +0000 UTC m=+4859.805279281" watchObservedRunningTime="2026-03-07 09:10:02.929576097 +0000 UTC m=+4859.838742582"
Mar 07 09:10:03 crc kubenswrapper[4761]: I0307 09:10:03.893273 4761 generic.go:334] "Generic (PLEG): container finished" podID="77c8bd54-9347-4e87-bd44-76913cb2a3f6" containerID="258a09f7752f77736f7c79cc137d75713f6d9a437375f7b59cbe63159e498518" exitCode=0
Mar 07 09:10:03 crc kubenswrapper[4761]: I0307 09:10:03.894818 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547910-tp9pj" event={"ID":"77c8bd54-9347-4e87-bd44-76913cb2a3f6","Type":"ContainerDied","Data":"258a09f7752f77736f7c79cc137d75713f6d9a437375f7b59cbe63159e498518"}
Mar 07 09:10:05 crc kubenswrapper[4761]: I0307 09:10:05.422278 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547910-tp9pj"
Mar 07 09:10:05 crc kubenswrapper[4761]: I0307 09:10:05.445075 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfv26\" (UniqueName: \"kubernetes.io/projected/77c8bd54-9347-4e87-bd44-76913cb2a3f6-kube-api-access-hfv26\") pod \"77c8bd54-9347-4e87-bd44-76913cb2a3f6\" (UID: \"77c8bd54-9347-4e87-bd44-76913cb2a3f6\") "
Mar 07 09:10:05 crc kubenswrapper[4761]: I0307 09:10:05.455443 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77c8bd54-9347-4e87-bd44-76913cb2a3f6-kube-api-access-hfv26" (OuterVolumeSpecName: "kube-api-access-hfv26") pod "77c8bd54-9347-4e87-bd44-76913cb2a3f6" (UID: "77c8bd54-9347-4e87-bd44-76913cb2a3f6"). InnerVolumeSpecName "kube-api-access-hfv26". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 09:10:05 crc kubenswrapper[4761]: I0307 09:10:05.548755 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfv26\" (UniqueName: \"kubernetes.io/projected/77c8bd54-9347-4e87-bd44-76913cb2a3f6-kube-api-access-hfv26\") on node \"crc\" DevicePath \"\""
Mar 07 09:10:05 crc kubenswrapper[4761]: I0307 09:10:05.924089 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547910-tp9pj" event={"ID":"77c8bd54-9347-4e87-bd44-76913cb2a3f6","Type":"ContainerDied","Data":"6ad86775423df5083052684aff99a4d317d806c727d6f896c3ec16eb411f40fe"}
Mar 07 09:10:05 crc kubenswrapper[4761]: I0307 09:10:05.924131 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ad86775423df5083052684aff99a4d317d806c727d6f896c3ec16eb411f40fe"
Mar 07 09:10:05 crc kubenswrapper[4761]: I0307 09:10:05.924182 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547910-tp9pj"
Mar 07 09:10:05 crc kubenswrapper[4761]: I0307 09:10:05.994082 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547904-tqsph"]
Mar 07 09:10:06 crc kubenswrapper[4761]: I0307 09:10:06.008442 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547904-tqsph"]
Mar 07 09:10:06 crc kubenswrapper[4761]: I0307 09:10:06.353494 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p4pmn" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="registry-server" probeResult="failure" output=<
Mar 07 09:10:06 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 09:10:06 crc kubenswrapper[4761]: >
Mar 07 09:10:06 crc kubenswrapper[4761]: I0307 09:10:06.707556 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823"
Mar 07 09:10:06 crc kubenswrapper[4761]: E0307 09:10:06.708230 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:10:07 crc kubenswrapper[4761]: I0307 09:10:07.725367 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9" path="/var/lib/kubelet/pods/3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9/volumes"
Mar 07 09:10:15 crc kubenswrapper[4761]: I0307 09:10:15.015590 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p4pmn"
Mar 07 09:10:15 crc kubenswrapper[4761]: I0307 09:10:15.068567 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p4pmn"
Mar 07 09:10:15 crc kubenswrapper[4761]: I0307 09:10:15.288820 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p4pmn"]
Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.064562 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p4pmn" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="registry-server" containerID="cri-o://f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778" gracePeriod=2
Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.694751 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4pmn"
Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.880855 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-utilities\") pod \"8984806c-345c-44e0-afcb-6840f2a9cd5b\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") "
Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.881241 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r26kv\" (UniqueName: \"kubernetes.io/projected/8984806c-345c-44e0-afcb-6840f2a9cd5b-kube-api-access-r26kv\") pod \"8984806c-345c-44e0-afcb-6840f2a9cd5b\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") "
Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.881272 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-catalog-content\") pod \"8984806c-345c-44e0-afcb-6840f2a9cd5b\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") "
Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.882320 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-utilities" (OuterVolumeSpecName: "utilities") pod "8984806c-345c-44e0-afcb-6840f2a9cd5b" (UID: "8984806c-345c-44e0-afcb-6840f2a9cd5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.895426 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8984806c-345c-44e0-afcb-6840f2a9cd5b-kube-api-access-r26kv" (OuterVolumeSpecName: "kube-api-access-r26kv") pod "8984806c-345c-44e0-afcb-6840f2a9cd5b" (UID: "8984806c-345c-44e0-afcb-6840f2a9cd5b"). InnerVolumeSpecName "kube-api-access-r26kv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.984121 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.984167 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r26kv\" (UniqueName: \"kubernetes.io/projected/8984806c-345c-44e0-afcb-6840f2a9cd5b-kube-api-access-r26kv\") on node \"crc\" DevicePath \"\""
Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.987670 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8984806c-345c-44e0-afcb-6840f2a9cd5b" (UID: "8984806c-345c-44e0-afcb-6840f2a9cd5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.087732 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.088583 4761 generic.go:334] "Generic (PLEG): container finished" podID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerID="f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778" exitCode=0
Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.088627 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4pmn" event={"ID":"8984806c-345c-44e0-afcb-6840f2a9cd5b","Type":"ContainerDied","Data":"f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778"}
Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.088659 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4pmn" event={"ID":"8984806c-345c-44e0-afcb-6840f2a9cd5b","Type":"ContainerDied","Data":"a889e64c97f59698f5e456831e4dddfb3c62bf982e0f6734f0a8adf1561c6b49"}
Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.088666 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4pmn"
Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.088679 4761 scope.go:117] "RemoveContainer" containerID="f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778"
Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.145968 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p4pmn"]
Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.158512 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p4pmn"]
Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.162934 4761 scope.go:117] "RemoveContainer" containerID="0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176"
Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.254976 4761 scope.go:117] "RemoveContainer" containerID="47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61"
Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.358676 4761 scope.go:117] "RemoveContainer" containerID="f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778"
Mar 07 09:10:18 crc kubenswrapper[4761]: E0307 09:10:18.359476 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778\": container with ID starting with f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778 not found: ID does not exist" containerID="f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778"
Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.359766 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778"} err="failed to get container status \"f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778\": rpc error: code = NotFound desc = could not find container \"f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778\": container with ID starting with f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778 not found: ID does not exist"
Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.359913 4761 scope.go:117] "RemoveContainer" containerID="0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176"
Mar 07 09:10:18 crc kubenswrapper[4761]: E0307 09:10:18.360236 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176\": container with ID starting with 0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176 not found: ID does not exist" containerID="0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176"
Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.360309 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176"} err="failed to get container status \"0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176\": rpc error: code = NotFound desc = could not find container \"0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176\": container with ID starting with 0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176 not found: ID does not exist"
Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.360335 4761 scope.go:117] "RemoveContainer" containerID="47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61"
Mar 07 09:10:18 crc kubenswrapper[4761]: E0307 09:10:18.360637 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61\": container with ID starting with 47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61 not found: ID does not exist" containerID="47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61"
Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.360685 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61"} err="failed to get container status \"47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61\": rpc error: code = NotFound desc = could not find container \"47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61\": container with ID starting with 47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61 not found: ID does not exist"
Mar 07 09:10:19 crc kubenswrapper[4761]: I0307 09:10:19.717580 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" path="/var/lib/kubelet/pods/8984806c-345c-44e0-afcb-6840f2a9cd5b/volumes"
Mar 07 09:10:21 crc kubenswrapper[4761]: I0307 09:10:21.706369 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823"
Mar 07 09:10:21 crc kubenswrapper[4761]: E0307 09:10:21.706940 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:10:23 crc kubenswrapper[4761]: I0307 09:10:23.503579 4761 scope.go:117] "RemoveContainer" containerID="751e663a953c9621b6f7e8bbf8ccfd3bd89e09b8c2183de735e7eaa4b9bffba7"
Mar 07 09:10:33 crc kubenswrapper[4761]: I0307 09:10:33.715313 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823"
Mar 07 09:10:33 crc kubenswrapper[4761]: E0307 09:10:33.716077 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:10:47 crc kubenswrapper[4761]: I0307 09:10:47.706627 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823"
Mar 07 09:10:47 crc kubenswrapper[4761]: E0307 09:10:47.707836 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:11:01 crc kubenswrapper[4761]: I0307 09:11:01.707261 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823"
Mar 07 09:11:01 crc kubenswrapper[4761]: E0307 09:11:01.708384 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:11:16 crc kubenswrapper[4761]: I0307 09:11:16.707102 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823"
Mar 07 09:11:16 crc kubenswrapper[4761]: E0307 09:11:16.707984 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:11:29 crc kubenswrapper[4761]: I0307 09:11:29.707554 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823"
Mar 07 09:11:29 crc kubenswrapper[4761]: E0307 09:11:29.708833 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:11:40 crc kubenswrapper[4761]: I0307 09:11:40.708452 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823"
Mar 07 09:11:40 crc kubenswrapper[4761]: E0307 09:11:40.709954 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:11:52 crc kubenswrapper[4761]: I0307 09:11:52.706337 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823"
Mar 07 09:11:53 crc kubenswrapper[4761]: I0307 09:11:53.259883 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"45493895bc908f690bc18f8d9a3f4e9f36cdf8af714be35170fe2ff42764c391"}
Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.158257 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547912-49bh4"]
Mar 07 09:12:00 crc kubenswrapper[4761]: E0307 09:12:00.160975 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="registry-server"
Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.161136 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="registry-server"
Mar 07 09:12:00 crc kubenswrapper[4761]: E0307 09:12:00.161189 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="extract-utilities"
Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.161198 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="extract-utilities"
Mar 07 09:12:00 crc kubenswrapper[4761]: E0307 09:12:00.161222 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77c8bd54-9347-4e87-bd44-76913cb2a3f6" containerName="oc"
Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.161231 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="77c8bd54-9347-4e87-bd44-76913cb2a3f6" containerName="oc"
Mar 07 09:12:00 crc kubenswrapper[4761]: E0307 09:12:00.161261 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="extract-content"
Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.161269 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="extract-content"
Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.162292 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="registry-server"
Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.162478 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="77c8bd54-9347-4e87-bd44-76913cb2a3f6" containerName="oc"
Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.164111 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547912-49bh4"
Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.166685 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.166919 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.167067 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv"
Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.190602 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547912-49bh4"]
Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.270001 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g6mv\" (UniqueName: \"kubernetes.io/projected/24a7900d-f79e-4ea3-92bb-9d0af09ee62f-kube-api-access-9g6mv\") pod \"auto-csr-approver-29547912-49bh4\" (UID: \"24a7900d-f79e-4ea3-92bb-9d0af09ee62f\") " pod="openshift-infra/auto-csr-approver-29547912-49bh4"
Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.373487 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g6mv\" (UniqueName: \"kubernetes.io/projected/24a7900d-f79e-4ea3-92bb-9d0af09ee62f-kube-api-access-9g6mv\") pod \"auto-csr-approver-29547912-49bh4\" (UID: \"24a7900d-f79e-4ea3-92bb-9d0af09ee62f\") " pod="openshift-infra/auto-csr-approver-29547912-49bh4"
Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.439810 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g6mv\" (UniqueName: \"kubernetes.io/projected/24a7900d-f79e-4ea3-92bb-9d0af09ee62f-kube-api-access-9g6mv\") pod \"auto-csr-approver-29547912-49bh4\" (UID: \"24a7900d-f79e-4ea3-92bb-9d0af09ee62f\") " 
pod="openshift-infra/auto-csr-approver-29547912-49bh4" Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.621824 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547912-49bh4" Mar 07 09:12:01 crc kubenswrapper[4761]: I0307 09:12:01.188994 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547912-49bh4"] Mar 07 09:12:01 crc kubenswrapper[4761]: W0307 09:12:01.322866 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24a7900d_f79e_4ea3_92bb_9d0af09ee62f.slice/crio-d0e095c50af99f4a1ee2741f7f045a69f87569588e5683709a523ee74f25e34e WatchSource:0}: Error finding container d0e095c50af99f4a1ee2741f7f045a69f87569588e5683709a523ee74f25e34e: Status 404 returned error can't find the container with id d0e095c50af99f4a1ee2741f7f045a69f87569588e5683709a523ee74f25e34e Mar 07 09:12:01 crc kubenswrapper[4761]: I0307 09:12:01.350949 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547912-49bh4" event={"ID":"24a7900d-f79e-4ea3-92bb-9d0af09ee62f","Type":"ContainerStarted","Data":"d0e095c50af99f4a1ee2741f7f045a69f87569588e5683709a523ee74f25e34e"} Mar 07 09:12:04 crc kubenswrapper[4761]: I0307 09:12:04.385414 4761 generic.go:334] "Generic (PLEG): container finished" podID="24a7900d-f79e-4ea3-92bb-9d0af09ee62f" containerID="6f37c673145b5f0b43da9649c88fc1a3229b7c0a257204cae85e333d5b0607d1" exitCode=0 Mar 07 09:12:04 crc kubenswrapper[4761]: I0307 09:12:04.385504 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547912-49bh4" event={"ID":"24a7900d-f79e-4ea3-92bb-9d0af09ee62f","Type":"ContainerDied","Data":"6f37c673145b5f0b43da9649c88fc1a3229b7c0a257204cae85e333d5b0607d1"} Mar 07 09:12:05 crc kubenswrapper[4761]: I0307 09:12:05.958879 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547912-49bh4" Mar 07 09:12:06 crc kubenswrapper[4761]: I0307 09:12:06.142433 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g6mv\" (UniqueName: \"kubernetes.io/projected/24a7900d-f79e-4ea3-92bb-9d0af09ee62f-kube-api-access-9g6mv\") pod \"24a7900d-f79e-4ea3-92bb-9d0af09ee62f\" (UID: \"24a7900d-f79e-4ea3-92bb-9d0af09ee62f\") " Mar 07 09:12:06 crc kubenswrapper[4761]: I0307 09:12:06.158033 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a7900d-f79e-4ea3-92bb-9d0af09ee62f-kube-api-access-9g6mv" (OuterVolumeSpecName: "kube-api-access-9g6mv") pod "24a7900d-f79e-4ea3-92bb-9d0af09ee62f" (UID: "24a7900d-f79e-4ea3-92bb-9d0af09ee62f"). InnerVolumeSpecName "kube-api-access-9g6mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:12:06 crc kubenswrapper[4761]: I0307 09:12:06.246295 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g6mv\" (UniqueName: \"kubernetes.io/projected/24a7900d-f79e-4ea3-92bb-9d0af09ee62f-kube-api-access-9g6mv\") on node \"crc\" DevicePath \"\"" Mar 07 09:12:06 crc kubenswrapper[4761]: I0307 09:12:06.422737 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547912-49bh4" event={"ID":"24a7900d-f79e-4ea3-92bb-9d0af09ee62f","Type":"ContainerDied","Data":"d0e095c50af99f4a1ee2741f7f045a69f87569588e5683709a523ee74f25e34e"} Mar 07 09:12:06 crc kubenswrapper[4761]: I0307 09:12:06.422776 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0e095c50af99f4a1ee2741f7f045a69f87569588e5683709a523ee74f25e34e" Mar 07 09:12:06 crc kubenswrapper[4761]: I0307 09:12:06.422855 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547912-49bh4" Mar 07 09:12:07 crc kubenswrapper[4761]: I0307 09:12:07.021970 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547906-8npwc"] Mar 07 09:12:07 crc kubenswrapper[4761]: I0307 09:12:07.039456 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547906-8npwc"] Mar 07 09:12:07 crc kubenswrapper[4761]: I0307 09:12:07.719146 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5baa6ec-91e1-4249-a7a5-89b76d419e4b" path="/var/lib/kubelet/pods/e5baa6ec-91e1-4249-a7a5-89b76d419e4b/volumes" Mar 07 09:12:23 crc kubenswrapper[4761]: I0307 09:12:23.624491 4761 scope.go:117] "RemoveContainer" containerID="b7cef6fa5da4525dafd29a5b4cac6a2dd3cd39f62d9e7b5c1c0c0e186d1fce71" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.747113 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Mar 07 09:13:18 crc kubenswrapper[4761]: E0307 09:13:18.749878 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a7900d-f79e-4ea3-92bb-9d0af09ee62f" containerName="oc" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.750037 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a7900d-f79e-4ea3-92bb-9d0af09ee62f" containerName="oc" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.750666 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a7900d-f79e-4ea3-92bb-9d0af09ee62f" containerName="oc" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.752191 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.755419 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.755675 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pgk27" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.756005 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.756219 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.760974 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.789925 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-config-data\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.790017 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.790061 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.892365 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.892475 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.892521 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-config-data\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.893186 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.893426 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.893526 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.893645 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.893727 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgjsd\" (UniqueName: \"kubernetes.io/projected/cf1a0263-2849-4fc3-a733-eebca0481aae-kube-api-access-cgjsd\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.893873 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.894590 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-config-data\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.894921 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.899192 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.996573 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.996687 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.996753 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.996822 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.996858 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.996881 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgjsd\" (UniqueName: \"kubernetes.io/projected/cf1a0263-2849-4fc3-a733-eebca0481aae-kube-api-access-cgjsd\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.997197 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.997299 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.999177 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Mar 07 09:13:19 crc kubenswrapper[4761]: I0307 09:13:19.000391 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:19 crc kubenswrapper[4761]: I0307 09:13:19.001130 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:19 crc kubenswrapper[4761]: I0307 09:13:19.017647 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgjsd\" (UniqueName: \"kubernetes.io/projected/cf1a0263-2849-4fc3-a733-eebca0481aae-kube-api-access-cgjsd\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:19 crc kubenswrapper[4761]: I0307 09:13:19.078934 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " 
pod="openstack/tempest-tests-tempest" Mar 07 09:13:19 crc kubenswrapper[4761]: I0307 09:13:19.372775 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 07 09:13:19 crc kubenswrapper[4761]: W0307 09:13:19.937460 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf1a0263_2849_4fc3_a733_eebca0481aae.slice/crio-c89bee12915462bdfe58e59bd9de049cee18dd7969563e4c2bcb042bce31866a WatchSource:0}: Error finding container c89bee12915462bdfe58e59bd9de049cee18dd7969563e4c2bcb042bce31866a: Status 404 returned error can't find the container with id c89bee12915462bdfe58e59bd9de049cee18dd7969563e4c2bcb042bce31866a Mar 07 09:13:19 crc kubenswrapper[4761]: I0307 09:13:19.939212 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 07 09:13:20 crc kubenswrapper[4761]: I0307 09:13:20.641537 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"cf1a0263-2849-4fc3-a733-eebca0481aae","Type":"ContainerStarted","Data":"c89bee12915462bdfe58e59bd9de049cee18dd7969563e4c2bcb042bce31866a"} Mar 07 09:13:56 crc kubenswrapper[4761]: E0307 09:13:56.291857 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Mar 07 09:13:56 crc kubenswrapper[4761]: E0307 09:13:56.321146 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cgjsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(cf1a0263-2849-4fc3-a733-eebca0481aae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 09:13:56 crc kubenswrapper[4761]: E0307 09:13:56.322805 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="cf1a0263-2849-4fc3-a733-eebca0481aae" Mar 07 09:13:57 crc kubenswrapper[4761]: E0307 09:13:57.074293 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="cf1a0263-2849-4fc3-a733-eebca0481aae" Mar 07 09:14:00 crc 
kubenswrapper[4761]: I0307 09:14:00.269494 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547914-mnrtr"] Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.271842 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547914-mnrtr" Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.274434 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.274741 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.274895 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.280216 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547914-mnrtr"] Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.461509 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sffqx\" (UniqueName: \"kubernetes.io/projected/8285a2d6-1653-46b3-ac0e-481bf33fa2e0-kube-api-access-sffqx\") pod \"auto-csr-approver-29547914-mnrtr\" (UID: \"8285a2d6-1653-46b3-ac0e-481bf33fa2e0\") " pod="openshift-infra/auto-csr-approver-29547914-mnrtr" Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.563443 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sffqx\" (UniqueName: \"kubernetes.io/projected/8285a2d6-1653-46b3-ac0e-481bf33fa2e0-kube-api-access-sffqx\") pod \"auto-csr-approver-29547914-mnrtr\" (UID: \"8285a2d6-1653-46b3-ac0e-481bf33fa2e0\") " pod="openshift-infra/auto-csr-approver-29547914-mnrtr" Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.597769 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sffqx\" (UniqueName: \"kubernetes.io/projected/8285a2d6-1653-46b3-ac0e-481bf33fa2e0-kube-api-access-sffqx\") pod \"auto-csr-approver-29547914-mnrtr\" (UID: \"8285a2d6-1653-46b3-ac0e-481bf33fa2e0\") " pod="openshift-infra/auto-csr-approver-29547914-mnrtr" Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.728074 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547914-mnrtr" Mar 07 09:14:01 crc kubenswrapper[4761]: I0307 09:14:01.489285 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547914-mnrtr"] Mar 07 09:14:02 crc kubenswrapper[4761]: I0307 09:14:02.214204 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547914-mnrtr" event={"ID":"8285a2d6-1653-46b3-ac0e-481bf33fa2e0","Type":"ContainerStarted","Data":"0e13678e7e126bb7f2f849f7c2af80b9c5f72c21bafc1a0d2933475ff129c839"} Mar 07 09:14:03 crc kubenswrapper[4761]: I0307 09:14:03.240967 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547914-mnrtr" event={"ID":"8285a2d6-1653-46b3-ac0e-481bf33fa2e0","Type":"ContainerStarted","Data":"eea4e62f70ef92b7ddd8ac5f32ab9d1a9b500bbe7bcace62e37bbaeaf124c8b6"} Mar 07 09:14:03 crc kubenswrapper[4761]: I0307 09:14:03.260362 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547914-mnrtr" podStartSLOduration=2.429486998 podStartE2EDuration="3.260344061s" podCreationTimestamp="2026-03-07 09:14:00 +0000 UTC" firstStartedPulling="2026-03-07 09:14:01.496386394 +0000 UTC m=+5098.405552879" lastFinishedPulling="2026-03-07 09:14:02.327243447 +0000 UTC m=+5099.236409942" observedRunningTime="2026-03-07 09:14:03.255099844 +0000 UTC m=+5100.164266319" watchObservedRunningTime="2026-03-07 09:14:03.260344061 +0000 UTC m=+5100.169510536" Mar 07 09:14:05 crc 
kubenswrapper[4761]: I0307 09:14:05.261211 4761 generic.go:334] "Generic (PLEG): container finished" podID="8285a2d6-1653-46b3-ac0e-481bf33fa2e0" containerID="eea4e62f70ef92b7ddd8ac5f32ab9d1a9b500bbe7bcace62e37bbaeaf124c8b6" exitCode=0
Mar 07 09:14:05 crc kubenswrapper[4761]: I0307 09:14:05.261309 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547914-mnrtr" event={"ID":"8285a2d6-1653-46b3-ac0e-481bf33fa2e0","Type":"ContainerDied","Data":"eea4e62f70ef92b7ddd8ac5f32ab9d1a9b500bbe7bcace62e37bbaeaf124c8b6"}
Mar 07 09:14:06 crc kubenswrapper[4761]: I0307 09:14:06.674527 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547914-mnrtr"
Mar 07 09:14:06 crc kubenswrapper[4761]: I0307 09:14:06.822287 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sffqx\" (UniqueName: \"kubernetes.io/projected/8285a2d6-1653-46b3-ac0e-481bf33fa2e0-kube-api-access-sffqx\") pod \"8285a2d6-1653-46b3-ac0e-481bf33fa2e0\" (UID: \"8285a2d6-1653-46b3-ac0e-481bf33fa2e0\") "
Mar 07 09:14:06 crc kubenswrapper[4761]: I0307 09:14:06.840940 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8285a2d6-1653-46b3-ac0e-481bf33fa2e0-kube-api-access-sffqx" (OuterVolumeSpecName: "kube-api-access-sffqx") pod "8285a2d6-1653-46b3-ac0e-481bf33fa2e0" (UID: "8285a2d6-1653-46b3-ac0e-481bf33fa2e0"). InnerVolumeSpecName "kube-api-access-sffqx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 09:14:06 crc kubenswrapper[4761]: I0307 09:14:06.925049 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sffqx\" (UniqueName: \"kubernetes.io/projected/8285a2d6-1653-46b3-ac0e-481bf33fa2e0-kube-api-access-sffqx\") on node \"crc\" DevicePath \"\""
Mar 07 09:14:07 crc kubenswrapper[4761]: I0307 09:14:07.288186 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547914-mnrtr" event={"ID":"8285a2d6-1653-46b3-ac0e-481bf33fa2e0","Type":"ContainerDied","Data":"0e13678e7e126bb7f2f849f7c2af80b9c5f72c21bafc1a0d2933475ff129c839"}
Mar 07 09:14:07 crc kubenswrapper[4761]: I0307 09:14:07.288243 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e13678e7e126bb7f2f849f7c2af80b9c5f72c21bafc1a0d2933475ff129c839"
Mar 07 09:14:07 crc kubenswrapper[4761]: I0307 09:14:07.288247 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547914-mnrtr"
Mar 07 09:14:07 crc kubenswrapper[4761]: I0307 09:14:07.384995 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547908-djgzj"]
Mar 07 09:14:07 crc kubenswrapper[4761]: I0307 09:14:07.399476 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547908-djgzj"]
Mar 07 09:14:07 crc kubenswrapper[4761]: I0307 09:14:07.720291 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d3b9b36-b295-4d46-8ac4-c53634b7fd31" path="/var/lib/kubelet/pods/8d3b9b36-b295-4d46-8ac4-c53634b7fd31/volumes"
Mar 07 09:14:11 crc kubenswrapper[4761]: I0307 09:14:11.261265 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Mar 07 09:14:13 crc kubenswrapper[4761]: I0307 09:14:13.354601 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"cf1a0263-2849-4fc3-a733-eebca0481aae","Type":"ContainerStarted","Data":"3843e59e15646ab966087faa2ca0e895e0d384887c6b0a13b92f60562c3c3edb"}
Mar 07 09:14:13 crc kubenswrapper[4761]: I0307 09:14:13.380759 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.063673043 podStartE2EDuration="56.380722636s" podCreationTimestamp="2026-03-07 09:13:17 +0000 UTC" firstStartedPulling="2026-03-07 09:13:19.94208649 +0000 UTC m=+5056.851252965" lastFinishedPulling="2026-03-07 09:14:11.259136083 +0000 UTC m=+5108.168302558" observedRunningTime="2026-03-07 09:14:13.370710378 +0000 UTC m=+5110.279876853" watchObservedRunningTime="2026-03-07 09:14:13.380722636 +0000 UTC m=+5110.289889111"
Mar 07 09:14:13 crc kubenswrapper[4761]: I0307 09:14:13.767940 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 09:14:13 crc kubenswrapper[4761]: I0307 09:14:13.767997 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 09:14:23 crc kubenswrapper[4761]: I0307 09:14:23.792470 4761 scope.go:117] "RemoveContainer" containerID="08fb1919b7c18d41d8bdce3c775ce0c19a509988ae99655b98ed10fa4e5ccf1f"
Mar 07 09:14:43 crc kubenswrapper[4761]: I0307 09:14:43.768090 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 09:14:43 crc kubenswrapper[4761]: I0307 09:14:43.768591 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.475116 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"]
Mar 07 09:15:00 crc kubenswrapper[4761]: E0307 09:15:00.482956 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8285a2d6-1653-46b3-ac0e-481bf33fa2e0" containerName="oc"
Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.483479 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8285a2d6-1653-46b3-ac0e-481bf33fa2e0" containerName="oc"
Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.484061 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="8285a2d6-1653-46b3-ac0e-481bf33fa2e0" containerName="oc"
Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.491253 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"
Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.495873 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.496594 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.537833 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"]
Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.640385 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfthn\" (UniqueName: \"kubernetes.io/projected/c5c04e80-73b0-4955-9310-90ae9b38fcc5-kube-api-access-nfthn\") pod \"collect-profiles-29547915-89cs9\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"
Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.640856 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5c04e80-73b0-4955-9310-90ae9b38fcc5-secret-volume\") pod \"collect-profiles-29547915-89cs9\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"
Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.640905 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5c04e80-73b0-4955-9310-90ae9b38fcc5-config-volume\") pod \"collect-profiles-29547915-89cs9\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"
Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.743930 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfthn\" (UniqueName: \"kubernetes.io/projected/c5c04e80-73b0-4955-9310-90ae9b38fcc5-kube-api-access-nfthn\") pod \"collect-profiles-29547915-89cs9\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"
Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.744352 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5c04e80-73b0-4955-9310-90ae9b38fcc5-secret-volume\") pod \"collect-profiles-29547915-89cs9\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"
Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.744421 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5c04e80-73b0-4955-9310-90ae9b38fcc5-config-volume\") pod \"collect-profiles-29547915-89cs9\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"
Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.758039 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5c04e80-73b0-4955-9310-90ae9b38fcc5-config-volume\") pod \"collect-profiles-29547915-89cs9\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"
Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.772650 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfthn\" (UniqueName: \"kubernetes.io/projected/c5c04e80-73b0-4955-9310-90ae9b38fcc5-kube-api-access-nfthn\") pod \"collect-profiles-29547915-89cs9\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"
Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.773844 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5c04e80-73b0-4955-9310-90ae9b38fcc5-secret-volume\") pod \"collect-profiles-29547915-89cs9\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"
Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.832321 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"
Mar 07 09:15:02 crc kubenswrapper[4761]: I0307 09:15:02.571374 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"]
Mar 07 09:15:02 crc kubenswrapper[4761]: W0307 09:15:02.586664 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5c04e80_73b0_4955_9310_90ae9b38fcc5.slice/crio-3b5c615743ef655d968c4d45d452130a4ef3ca5a4fe6c80331ff1e6afbc4c1ee WatchSource:0}: Error finding container 3b5c615743ef655d968c4d45d452130a4ef3ca5a4fe6c80331ff1e6afbc4c1ee: Status 404 returned error can't find the container with id 3b5c615743ef655d968c4d45d452130a4ef3ca5a4fe6c80331ff1e6afbc4c1ee
Mar 07 09:15:03 crc kubenswrapper[4761]: I0307 09:15:03.014929 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" event={"ID":"c5c04e80-73b0-4955-9310-90ae9b38fcc5","Type":"ContainerStarted","Data":"025ed924ee34200e1255537538153a42cdbc5089fa087c6e0a15c72b21c5682d"}
Mar 07 09:15:03 crc kubenswrapper[4761]: I0307 09:15:03.015209 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" event={"ID":"c5c04e80-73b0-4955-9310-90ae9b38fcc5","Type":"ContainerStarted","Data":"3b5c615743ef655d968c4d45d452130a4ef3ca5a4fe6c80331ff1e6afbc4c1ee"}
Mar 07 09:15:03 crc kubenswrapper[4761]: I0307 09:15:03.047965 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" podStartSLOduration=3.047585874 podStartE2EDuration="3.047585874s" podCreationTimestamp="2026-03-07 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 09:15:03.030589653 +0000 UTC m=+5159.939756128" watchObservedRunningTime="2026-03-07 09:15:03.047585874 +0000 UTC m=+5159.956752349"
Mar 07 09:15:05 crc kubenswrapper[4761]: I0307 09:15:05.043679 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" event={"ID":"c5c04e80-73b0-4955-9310-90ae9b38fcc5","Type":"ContainerDied","Data":"025ed924ee34200e1255537538153a42cdbc5089fa087c6e0a15c72b21c5682d"}
Mar 07 09:15:05 crc kubenswrapper[4761]: I0307 09:15:05.046551 4761 generic.go:334] "Generic (PLEG): container finished" podID="c5c04e80-73b0-4955-9310-90ae9b38fcc5" containerID="025ed924ee34200e1255537538153a42cdbc5089fa087c6e0a15c72b21c5682d" exitCode=0
Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.094783 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"
Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.171027 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" event={"ID":"c5c04e80-73b0-4955-9310-90ae9b38fcc5","Type":"ContainerDied","Data":"3b5c615743ef655d968c4d45d452130a4ef3ca5a4fe6c80331ff1e6afbc4c1ee"}
Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.171598 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"
Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.178656 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b5c615743ef655d968c4d45d452130a4ef3ca5a4fe6c80331ff1e6afbc4c1ee"
Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.278841 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfthn\" (UniqueName: \"kubernetes.io/projected/c5c04e80-73b0-4955-9310-90ae9b38fcc5-kube-api-access-nfthn\") pod \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") "
Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.279175 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5c04e80-73b0-4955-9310-90ae9b38fcc5-config-volume\") pod \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") "
Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.279298 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5c04e80-73b0-4955-9310-90ae9b38fcc5-secret-volume\") pod \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") "
Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.310948 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5c04e80-73b0-4955-9310-90ae9b38fcc5-config-volume" (OuterVolumeSpecName: "config-volume") pod "c5c04e80-73b0-4955-9310-90ae9b38fcc5" (UID: "c5c04e80-73b0-4955-9310-90ae9b38fcc5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.369730 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c04e80-73b0-4955-9310-90ae9b38fcc5-kube-api-access-nfthn" (OuterVolumeSpecName: "kube-api-access-nfthn") pod "c5c04e80-73b0-4955-9310-90ae9b38fcc5" (UID: "c5c04e80-73b0-4955-9310-90ae9b38fcc5"). InnerVolumeSpecName "kube-api-access-nfthn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.383178 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c04e80-73b0-4955-9310-90ae9b38fcc5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c5c04e80-73b0-4955-9310-90ae9b38fcc5" (UID: "c5c04e80-73b0-4955-9310-90ae9b38fcc5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.388262 4761 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5c04e80-73b0-4955-9310-90ae9b38fcc5-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.388310 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfthn\" (UniqueName: \"kubernetes.io/projected/c5c04e80-73b0-4955-9310-90ae9b38fcc5-kube-api-access-nfthn\") on node \"crc\" DevicePath \"\""
Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.388324 4761 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5c04e80-73b0-4955-9310-90ae9b38fcc5-config-volume\") on node \"crc\" DevicePath \"\""
Mar 07 09:15:08 crc kubenswrapper[4761]: I0307 09:15:08.415042 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9"]
Mar 07 09:15:08 crc kubenswrapper[4761]: I0307 09:15:08.429610 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9"]
Mar 07 09:15:09 crc kubenswrapper[4761]: I0307 09:15:09.725476 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14b5f1dc-f0be-4c41-87a0-d623568079c0" path="/var/lib/kubelet/pods/14b5f1dc-f0be-4c41-87a0-d623568079c0/volumes"
Mar 07 09:15:13 crc kubenswrapper[4761]: I0307 09:15:13.771756 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 09:15:13 crc kubenswrapper[4761]: I0307 09:15:13.778781 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 09:15:13 crc kubenswrapper[4761]: I0307 09:15:13.778834 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9"
Mar 07 09:15:13 crc kubenswrapper[4761]: I0307 09:15:13.788176 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"45493895bc908f690bc18f8d9a3f4e9f36cdf8af714be35170fe2ff42764c391"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 07 09:15:13 crc kubenswrapper[4761]: I0307 09:15:13.789620 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://45493895bc908f690bc18f8d9a3f4e9f36cdf8af714be35170fe2ff42764c391" gracePeriod=600
Mar 07 09:15:14 crc kubenswrapper[4761]: I0307 09:15:14.273446 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"45493895bc908f690bc18f8d9a3f4e9f36cdf8af714be35170fe2ff42764c391"}
Mar 07 09:15:14 crc kubenswrapper[4761]: I0307 09:15:14.274089 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="45493895bc908f690bc18f8d9a3f4e9f36cdf8af714be35170fe2ff42764c391" exitCode=0
Mar 07 09:15:14 crc kubenswrapper[4761]: I0307 09:15:14.278499 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823"
Mar 07 09:15:15 crc kubenswrapper[4761]: I0307 09:15:15.286833 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"}
Mar 07 09:15:16 crc kubenswrapper[4761]: I0307 09:15:16.956466 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:15:16 crc kubenswrapper[4761]: I0307 09:15:16.956466 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:15:16 crc kubenswrapper[4761]: I0307 09:15:16.958899 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:16 crc kubenswrapper[4761]: I0307 09:15:16.958900 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:17 crc kubenswrapper[4761]: I0307 09:15:17.243111 4761 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5n9bv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:15:17 crc kubenswrapper[4761]: I0307 09:15:17.243483 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" podUID="9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:17 crc kubenswrapper[4761]: I0307 09:15:17.804765 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" probeResult="failure" output="command timed out"
Mar 07 09:15:23 crc kubenswrapper[4761]: I0307 09:15:23.940664 4761 scope.go:117] "RemoveContainer" containerID="9ec68e26cc56db9378f3e81051fa808a0bd9358047a5695ad5208364aef8551a"
Mar 07 09:15:26 crc kubenswrapper[4761]: I0307 09:15:26.959090 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:15:26 crc kubenswrapper[4761]: I0307 09:15:26.959086 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:15:26 crc kubenswrapper[4761]: I0307 09:15:26.964772 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:26 crc kubenswrapper[4761]: I0307 09:15:26.964850 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:43 crc kubenswrapper[4761]: I0307 09:15:43.669967 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" podUID="bf4af368-4dee-4a4a-8c43-fd7991ac3366" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:43 crc kubenswrapper[4761]: I0307 09:15:43.746070 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" podUID="90a2f442-aea1-44ac-bbb8-ba58c0969806" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:43 crc kubenswrapper[4761]: I0307 09:15:43.750359 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" podUID="9554e552-2329-4e93-835e-9dbcad7b7519" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:44 crc kubenswrapper[4761]: I0307 09:15:44.583023 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:15:44 crc kubenswrapper[4761]: I0307 09:15:44.586008 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:44 crc kubenswrapper[4761]: I0307 09:15:44.583122 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:15:44 crc kubenswrapper[4761]: I0307 09:15:44.586495 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:44 crc kubenswrapper[4761]: I0307 09:15:44.970946 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" podUID="353016f5-6859-4193-9845-69bf540c7ab3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:44 crc kubenswrapper[4761]: I0307 09:15:44.971019 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" podUID="0ce5a055-df90-4071-a5cf-f7361e01e5fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:44 crc kubenswrapper[4761]: I0307 09:15:44.970976 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:15:44 crc kubenswrapper[4761]: I0307 09:15:44.971118 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.031689 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.031769 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.032059 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" podUID="9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.033080 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podUID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.033156 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" podUID="3b477f52-57ee-4037-af3a-fa987453bdf2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.033184 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" podUID="b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.033209 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-75b4z" podUID="193543ae-839d-485e-a238-ae40e69f7b24" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.033235 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-75b4z" podUID="193543ae-839d-485e-a238-ae40e69f7b24" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.033637 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" podUID="6c6a959e-39ee-46ae-9cc5-03fe72cedb7a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.033695 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" podUID="a4bc9370-c64d-4e5e-a0bd-70297abb8c0d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.033761 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" podUID="9dcfc7f8-35e7-4fab-bb7a-c900caf10641" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.033923 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" podUID="b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.034177 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" podUID="baefa6a4-53d3-4158-a74f-87c9b766d760" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.742053 4761 patch_prober.go:28] interesting pod/metrics-server-854cd44758-k9qwx container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.742538 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" podUID="4d4f9001-7d67-467b-8028-ec6162564829" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.230519 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podUID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.530272 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.530527 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.530325 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.530882 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.828438 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled
(Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.828635 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.828498 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.828750 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.953904 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.954226 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.954021 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.954494 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:01 crc kubenswrapper[4761]: I0307 09:16:01.461628 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:01 crc kubenswrapper[4761]: I0307 09:16:01.461636 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:01 crc kubenswrapper[4761]: I0307 09:16:01.467200 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" 
containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:01 crc kubenswrapper[4761]: I0307 09:16:01.467299 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:01 crc kubenswrapper[4761]: I0307 09:16:01.619002 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" podUID="3dc06a77-85c3-42a9-a972-c3f33e46df4b" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:01 crc kubenswrapper[4761]: I0307 09:16:01.619230 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" podUID="3dc06a77-85c3-42a9-a972-c3f33e46df4b" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:01 crc kubenswrapper[4761]: I0307 09:16:01.706987 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" podUID="563c8932-7287-4158-bb9a-7f464230ae9f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:01 crc kubenswrapper[4761]: I0307 
09:16:01.835456 4761 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:01 crc kubenswrapper[4761]: I0307 09:16:01.835584 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.293429 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-hqsjt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.293504 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-hqsjt container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.293560 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podUID="efc019b2-ac66-44ef-a1e7-cce4db209456" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.293508 4761 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podUID="efc019b2-ac66-44ef-a1e7-cce4db209456" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.300062 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-b2qfh container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.300095 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-b2qfh container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.300125 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podUID="b942b317-2819-4d06-9e2a-ed257dd6e63e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.300130 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podUID="b942b317-2819-4d06-9e2a-ed257dd6e63e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.367936 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-lzrcd" 
podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.367992 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.368038 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.450896 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-86ddb6bd46-m2tp4" podUID="adfa916b-8977-446f-9387-932788e51e10" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.451226 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-86ddb6bd46-m2tp4" podUID="adfa916b-8977-446f-9387-932788e51e10" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.559794 4761 patch_prober.go:28] interesting pod/console-56dd85c946-zcd4c container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.142:8443/health\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.559838 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-56dd85c946-zcd4c" podUID="8bf201ac-6f66-42fb-83bd-d5faaf6dd126" containerName="console" probeResult="failure" output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.294665 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" podUID="90a2f442-aea1-44ac-bbb8-ba58c0969806" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.370890 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" podUID="3b477f52-57ee-4037-af3a-fa987453bdf2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.636664 4761 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.637569 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" 
probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.802844 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out" Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.804595 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out" Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.884876 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.884878 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.884935 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.884986 4761 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024009 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024072 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024116 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024235 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024255 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024269 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024198 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024318 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024454 4761 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" podUID="0febfb54-7188-4247-8d9b-2f166bf597ee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024454 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-75b4z" podUID="193543ae-839d-485e-a238-ae40e69f7b24" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024581 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-75b4z" podUID="193543ae-839d-485e-a238-ae40e69f7b24" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.064972 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" podUID="0a9a2953-a51f-42b6-8ff8-d3f860ff6377" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.209961 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" podUID="bc92e2bf-a093-4327-a1cd-807a2d916864" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.327010 4761 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podUID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.582710 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.582797 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.582920 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.582997 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 
09:16:04.732171 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-9475l" podUID="0013064e-ed56-415d-b236-1c92e98194d5" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 09:16:05 crc kubenswrapper[4761]: I0307 09:16:05.400501 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="526b9328-0f86-4c3d-9a27-116742cee11a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.170:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:05 crc kubenswrapper[4761]: I0307 09:16:05.400536 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="526b9328-0f86-4c3d-9a27-116742cee11a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.170:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:05 crc kubenswrapper[4761]: I0307 09:16:05.701425 4761 patch_prober.go:28] interesting pod/metrics-server-854cd44758-k9qwx container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:05 crc kubenswrapper[4761]: I0307 09:16:05.701507 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" podUID="4d4f9001-7d67-467b-8028-ec6162564829" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.115753 
4761 patch_prober.go:28] interesting pod/monitoring-plugin-67c8dd59f5-sbh4r container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.116601 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" podUID="08721f50-8882-42b0-9370-cbe4508753d3" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.88:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.276093 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podUID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.276133 4761 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-9vsj5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.276430 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" podUID="0868ef7f-3f74-41e3-bc81-8cf20dc88c43" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.276170 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podUID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.307381 4761 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-d62lh container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.307454 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" podUID="6092a906-c0c5-4dcd-bb59-a9ea6a3f2745" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.447237 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.447265 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.447328 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.447315 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.450600 4761 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-f9kfv container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.450638 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" podUID="c0d9aa49-bf5e-4663-9523-a67b07e95721" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.588907 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: 
Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.588976 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.588982 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.588988 4761 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-pvm88 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.589073 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.589121 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" podUID="22aee2b0-8c5f-486a-b74f-51b6452c7f8c" containerName="loki-query-frontend" 
probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.806745 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.806805 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.806814 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.806905 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.968944 4761 patch_prober.go:28] interesting 
pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.969025 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.969043 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.969130 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.969185 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.969166 4761 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.969103 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.969236 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.091930 4761 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-52lfx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.092221 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" podUID="25717bfc-51a4-4724-bbed-70d94a322755" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: 
I0307 09:16:07.091930 4761 patch_prober.go:28] interesting pod/loki-operator-controller-manager-6d4c45cc-fmrsq container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.51:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.092401 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" podUID="8a7603da-0d59-431b-82c9-59c887e9f8d6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.51:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.092021 4761 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-52lfx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.092480 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" podUID="25717bfc-51a4-4724-bbed-70d94a322755" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.194180 4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 
09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.194235 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.194938 4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.195006 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.201288 4761 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5n9bv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.201343 4761 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5n9bv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.201409 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" podUID="9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.201346 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" podUID="9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.293632 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-hqsjt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.293699 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podUID="efc019b2-ac66-44ef-a1e7-cce4db209456" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.293765 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-hqsjt container/gateway 
namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.293779 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podUID="efc019b2-ac66-44ef-a1e7-cce4db209456" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.299351 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-b2qfh container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.299381 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podUID="b942b317-2819-4d06-9e2a-ed257dd6e63e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.299551 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-b2qfh container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.299704 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podUID="b942b317-2819-4d06-9e2a-ed257dd6e63e" containerName="opa" 
probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.408147 4761 patch_prober.go:28] interesting pod/thanos-querier-6f4577c6dd-q542m container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.408212 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" podUID="fe7ce149-7c15-4b79-a744-d98a58d8407d" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.452400 4761 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.452465 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="133e9b5e-adcc-4dd6-b762-fc29c779b70a" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.499053 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe 
status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.499148 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.499311 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.499356 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.510514 4761 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.510569 4761 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="ed3dc6dd-e534-41c2-b652-4aa0714797a0" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.589896 4761 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.589970 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="2d390fba-d423-4b88-90b2-0b291fe8e35b" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.803387 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.805844 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:08 crc kubenswrapper[4761]: I0307 09:16:08.801208 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:08 crc kubenswrapper[4761]: I0307 09:16:08.801265 4761 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out" Mar 07 09:16:08 crc kubenswrapper[4761]: I0307 09:16:08.803938 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:08 crc kubenswrapper[4761]: I0307 09:16:08.804771 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2bdde810-6429-4553-a9bb-1ccef1f89e2d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 07 09:16:08 crc kubenswrapper[4761]: I0307 09:16:08.804931 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out" Mar 07 09:16:08 crc kubenswrapper[4761]: I0307 09:16:08.808918 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2bdde810-6429-4553-a9bb-1ccef1f89e2d" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Mar 07 09:16:09 crc kubenswrapper[4761]: I0307 09:16:09.039986 4761 patch_prober.go:28] interesting pod/nmstate-webhook-786f45cff4-vrchq container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:09 crc kubenswrapper[4761]: I0307 09:16:09.041314 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" podUID="fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3" containerName="nmstate-webhook" probeResult="failure" 
output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:09 crc kubenswrapper[4761]: I0307 09:16:09.290706 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="ed86dd3e-17e0-467b-8243-8209a04dcbe1" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.17:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:09 crc kubenswrapper[4761]: I0307 09:16:09.290761 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="ed86dd3e-17e0-467b-8243-8209a04dcbe1" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.17:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:09 crc kubenswrapper[4761]: I0307 09:16:09.359079 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" podUID="6bdda9de-4711-4fbc-b9d2-5f867691450a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:09 crc kubenswrapper[4761]: I0307 09:16:09.359084 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" podUID="6bdda9de-4711-4fbc-b9d2-5f867691450a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:09 crc kubenswrapper[4761]: I0307 09:16:09.785025 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" 
podUID="bd23eeaa-ed7e-45ea-9a40-613ac4e11120" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:09 crc kubenswrapper[4761]: I0307 09:16:09.785285 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" podUID="bd23eeaa-ed7e-45ea-9a40-613ac4e11120" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.170868 4761 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-kfph9 container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.170940 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" podUID="b17d76c5-b5d9-4f79-841e-287d05540b40" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.170888 4761 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-kfph9 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.171001 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" 
podUID="b17d76c5-b5d9-4f79-841e-287d05540b40" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.232018 4761 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-4l52t container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.16:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.232527 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" podUID="0c90c3e5-de84-4cb1-ac22-fe02ca708196" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.16:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.399331 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="526b9328-0f86-4c3d-9a27-116742cee11a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.170:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.399375 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="526b9328-0f86-4c3d-9a27-116742cee11a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.170:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.499737 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator 
namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.499797 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.499822 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.499838 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.503976 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.504025 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48"
Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.507227 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"43fecf17bd70cc24f894f9981f36f699613214c657ae37df741e21de54a09dc3"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.507649 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" containerID="cri-o://43fecf17bd70cc24f894f9981f36f699613214c657ae37df741e21de54a09dc3" gracePeriod=30
Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.807738 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-b5t8f" podUID="26b26086-7428-4218-a5c0-64eb4a9d581f" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.807751 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-b5t8f" podUID="26b26086-7428-4218-a5c0-64eb4a9d581f" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.161224 4761 trace.go:236] Trace[907017605]: "Calculate volume metrics of ovndbcluster-sb-etc-ovn for pod openstack/ovsdbserver-sb-0" (07-Mar-2026 09:16:05.683) (total time: 5471ms):
Mar 07 09:16:11 crc kubenswrapper[4761]: Trace[907017605]: [5.471132516s] [5.471132516s] END
Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.210925 4761 prober.go:107] "Probe failed" probeType="Readiness"
pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" podUID="4c23f924-b431-4a3e-819b-713e132885f4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.457315 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.457309 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.458439 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.458440 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get
\"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.505174 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.505241 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.627109 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" podUID="3dc06a77-85c3-42a9-a972-c3f33e46df4b" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.627252 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" podUID="3dc06a77-85c3-42a9-a972-c3f33e46df4b" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.810855 4761 prober.go:107] "Probe failed" probeType="Liveness"
pod="openshift-marketplace/redhat-operators-5p7lw" podUID="dc70d269-9a38-4cf3-a494-956420600965" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.810867 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-5p7lw" podUID="dc70d269-9a38-4cf3-a494-956420600965" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.839368 4761 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.839460 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.389880 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-hqsjt container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.389938 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-hqsjt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while
waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.389981 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podUID="efc019b2-ac66-44ef-a1e7-cce4db209456" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.389978 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podUID="efc019b2-ac66-44ef-a1e7-cce4db209456" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.390050 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-b2qfh container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.390118 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podUID="b942b317-2819-4d06-9e2a-ed257dd6e63e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.389891 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="controller"
probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.390166 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-b2qfh container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.390362 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podUID="b942b317-2819-4d06-9e2a-ed257dd6e63e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.390368 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.473920 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.473936 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-86ddb6bd46-m2tp4" podUID="adfa916b-8977-446f-9387-932788e51e10" containerName="controller" probeResult="failure" output="Get
\"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.474120 4761 patch_prober.go:28] interesting pod/thanos-querier-6f4577c6dd-q542m container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.474153 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" podUID="fe7ce149-7c15-4b79-a744-d98a58d8407d" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.474192 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-86ddb6bd46-m2tp4" podUID="adfa916b-8977-446f-9387-932788e51e10" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.499147 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.499211 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator"
probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.544336 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-dbw8z" podUID="de1f85b3-124d-434b-b053-4a24859497f1" containerName="registry-server" probeResult="failure" output=<
Mar 07 09:16:12 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 09:16:12 crc kubenswrapper[4761]: >
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.544349 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-dbw8z" podUID="de1f85b3-124d-434b-b053-4a24859497f1" containerName="registry-server" probeResult="failure" output=<
Mar 07 09:16:12 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 09:16:12 crc kubenswrapper[4761]: >
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.544492 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-hqkkk" podUID="b9d0650f-8057-46e1-a006-f240615ce96f" containerName="registry-server" probeResult="failure" output=<
Mar 07 09:16:12 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 09:16:12 crc kubenswrapper[4761]: >
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.544403 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-hqkkk" podUID="b9d0650f-8057-46e1-a006-f240615ce96f" containerName="registry-server" probeResult="failure" output=<
Mar 07 09:16:12 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 09:16:12 crc kubenswrapper[4761]: >
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.559650 4761 patch_prober.go:28] interesting pod/console-56dd85c946-zcd4c container/console namespace/openshift-console: Readiness
probe status=failure output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.559730 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-56dd85c946-zcd4c" podUID="8bf201ac-6f66-42fb-83bd-d5faaf6dd126" containerName="console" probeResult="failure" output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.657389 4761 trace.go:236] Trace[1431406075]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (07-Mar-2026 09:16:11.550) (total time: 1105ms):
Mar 07 09:16:12 crc kubenswrapper[4761]: Trace[1431406075]: [1.105970594s] [1.105970594s] END
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.938036 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" podUID="ffb7fdc9-854e-4990-81e1-b14fb9966476" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.938583 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" podUID="ffb7fdc9-854e-4990-81e1-b14fb9966476" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.466942 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q"
podUID="bf4af368-4dee-4a4a-8c43-fd7991ac3366" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.466968 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" podUID="90a2f442-aea1-44ac-bbb8-ba58c0969806" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.467188 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" podUID="bf4af368-4dee-4a4a-8c43-fd7991ac3366" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.591001 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" podUID="3b477f52-57ee-4037-af3a-fa987453bdf2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.632957 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" podUID="3b477f52-57ee-4037-af3a-fa987453bdf2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.633062 4761 prober.go:107] "Probe failed"
probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" podUID="9554e552-2329-4e93-835e-9dbcad7b7519" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.633503 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" podUID="9554e552-2329-4e93-835e-9dbcad7b7519" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.714941 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" podUID="90a2f442-aea1-44ac-bbb8-ba58c0969806" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.714949 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" podUID="9dcfc7f8-35e7-4fab-bb7a-c900caf10641" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.755914 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" podUID="a4bc9370-c64d-4e5e-a0bd-70297abb8c0d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:13 crc
kubenswrapper[4761]: I0307 09:16:13.755993 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" podUID="a4bc9370-c64d-4e5e-a0bd-70297abb8c0d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.755914 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" podUID="b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.808593 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.808596 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.809332 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-j8w2n" podUID="69902561-929c-428a-8dab-7a9a91fb3084" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.809874 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-j8w2n" podUID="69902561-929c-428a-8dab-7a9a91fb3084" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 07
09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.839089 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" podUID="0ce5a055-df90-4071-a5cf-f7361e01e5fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.839610 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" podUID="9dcfc7f8-35e7-4fab-bb7a-c900caf10641" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.839590 4761 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.840957 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.845518 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.923280 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz"
podUID="0ce5a055-df90-4071-a5cf-f7361e01e5fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.005060 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.005112 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.170078 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" podUID="0febfb54-7188-4247-8d9b-2f166bf597ee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.170960 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" podUID="baefa6a4-53d3-4158-a74f-87c9b766d760" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.171956 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq
container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.171994 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.211919 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" podUID="baefa6a4-53d3-4158-a74f-87c9b766d760" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.211953 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.211896 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07
09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.212019 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.212069 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.212361 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.212404 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.294970 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace:
Readiness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.295057 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.460985 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-75b4z" podUID="193543ae-839d-485e-a238-ae40e69f7b24" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.461045 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-75b4z" podUID="193543ae-839d-485e-a238-ae40e69f7b24" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.461028 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" podUID="0febfb54-7188-4247-8d9b-2f166bf597ee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.678023 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" podUID="bc92e2bf-a093-4327-a1cd-807a2d916864"
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.678094 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" podUID="353016f5-6859-4193-9845-69bf540c7ab3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.719151 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" podUID="353016f5-6859-4193-9845-69bf540c7ab3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.719234 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" podUID="9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760135 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" podUID="0a9a2953-a51f-42b6-8ff8-d3f860ff6377" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760178 4761 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" podUID="efa0b70d-ed5b-48ba-a601-bfc64689ed5a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760290 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" podUID="6c6a959e-39ee-46ae-9cc5-03fe72cedb7a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760322 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" podUID="efa0b70d-ed5b-48ba-a601-bfc64689ed5a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760347 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" podUID="bc92e2bf-a093-4327-a1cd-807a2d916864" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760352 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" podUID="9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: 
I0307 09:16:14.760369 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podUID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760390 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podUID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760532 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" podUID="6c6a959e-39ee-46ae-9cc5-03fe72cedb7a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760574 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" podUID="0a9a2953-a51f-42b6-8ff8-d3f860ff6377" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760593 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" podUID="6540426d-eaf7-4f8f-ab46-8305c545e1cb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.761011 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" podUID="6540426d-eaf7-4f8f-ab46-8305c545e1cb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.806806 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2bdde810-6429-4553-a9bb-1ccef1f89e2d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.399469 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="526b9328-0f86-4c3d-9a27-116742cee11a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.170:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.399470 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="526b9328-0f86-4c3d-9a27-116742cee11a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.170:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.401752 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.428389 4761 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-2tcxw container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="Get 
\"https://10.217.0.14:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.428464 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" podUID="071d5325-8638-4180-aefa-fb07f5533bb2" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.14:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.499468 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.499549 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.699792 4761 patch_prober.go:28] interesting pod/metrics-server-854cd44758-k9qwx container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.699820 4761 patch_prober.go:28] interesting pod/metrics-server-854cd44758-k9qwx container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get 
\"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.699884 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" podUID="4d4f9001-7d67-467b-8028-ec6162564829" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.699948 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" podUID="4d4f9001-7d67-467b-8028-ec6162564829" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.116906 4761 patch_prober.go:28] interesting pod/monitoring-plugin-67c8dd59f5-sbh4r container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/health\": context deadline exceeded" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.117302 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" podUID="08721f50-8882-42b0-9370-cbe4508753d3" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.88:9443/health\": context deadline exceeded" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.236705 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podUID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.236923 4761 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-9vsj5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.236956 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" podUID="0868ef7f-3f74-41e3-bc81-8cf20dc88c43" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.308451 4761 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-d62lh container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.308533 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" podUID="6092a906-c0c5-4dcd-bb59-a9ea6a3f2745" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.446090 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw 
container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.446100 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.446168 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.446250 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.450059 4761 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-f9kfv container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 
09:16:16.450110 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" podUID="c0d9aa49-bf5e-4663-9523-a67b07e95721" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.588896 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.588900 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.588954 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.588990 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.588959 4761 patch_prober.go:28] interesting 
pod/logging-loki-query-frontend-6d6859c548-pvm88 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.589046 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" podUID="22aee2b0-8c5f-486a-b74f-51b6452c7f8c" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.746971 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" podUID="563c8932-7287-4158-bb9a-7f464230ae9f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.746975 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" podUID="563c8932-7287-4158-bb9a-7f464230ae9f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.829943 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.829977 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.830017 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.829936 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.830018 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.830065 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get 
\"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.968873 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.968946 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.968986 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.968996 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.969024 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.969062 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.969072 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.969120 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.132952 4761 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-52lfx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.133024 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" podUID="25717bfc-51a4-4724-bbed-70d94a322755" 
containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.133144 4761 patch_prober.go:28] interesting pod/loki-operator-controller-manager-6d4c45cc-fmrsq container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.51:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.133187 4761 patch_prober.go:28] interesting pod/loki-operator-controller-manager-6d4c45cc-fmrsq container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.51:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.133224 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" podUID="8a7603da-0d59-431b-82c9-59c887e9f8d6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.51:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.133241 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" podUID="8a7603da-0d59-431b-82c9-59c887e9f8d6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.51:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.133155 4761 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-52lfx container/package-server-manager 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.133301 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" podUID="25717bfc-51a4-4724-bbed-70d94a322755" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.193755 4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.193807 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.193818 4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.193912 4761 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.201189 4761 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5n9bv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.201279 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" podUID="9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.201364 4761 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5n9bv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.201401 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" podUID="9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.292706 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-hqsjt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.292837 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-hqsjt container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.292937 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podUID="efc019b2-ac66-44ef-a1e7-cce4db209456" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.292855 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podUID="efc019b2-ac66-44ef-a1e7-cce4db209456" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.300037 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-b2qfh container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc 
kubenswrapper[4761]: I0307 09:16:17.300071 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podUID="b942b317-2819-4d06-9e2a-ed257dd6e63e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.300117 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-b2qfh container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.300130 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podUID="b942b317-2819-4d06-9e2a-ed257dd6e63e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: E0307 09:16:17.402994 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T09:16:07Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T09:16:07Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T09:16:07Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T09:16:07Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.407901 4761 patch_prober.go:28] interesting pod/thanos-querier-6f4577c6dd-q542m container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.407984 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" podUID="fe7ce149-7c15-4b79-a744-d98a58d8407d" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.451405 4761 patch_prober.go:28] interesting 
pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.451464 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="133e9b5e-adcc-4dd6-b762-fc29c779b70a" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.509811 4761 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.509894 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="ed3dc6dd-e534-41c2-b652-4aa0714797a0" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.588912 4761 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.588969 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="2d390fba-d423-4b88-90b2-0b291fe8e35b" 
containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.733296 4761 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.733375 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.804176 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.804495 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.804507 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:18 crc kubenswrapper[4761]: I0307 09:16:18.057184 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager 
namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": context deadline exceeded" start-of-body= Mar 07 09:16:18 crc kubenswrapper[4761]: I0307 09:16:18.057246 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": context deadline exceeded" Mar 07 09:16:18 crc kubenswrapper[4761]: I0307 09:16:18.402256 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="526b9328-0f86-4c3d-9a27-116742cee11a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.170:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:18 crc kubenswrapper[4761]: I0307 09:16:18.500408 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Mar 07 09:16:18 crc kubenswrapper[4761]: I0307 09:16:18.500483 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Mar 07 09:16:18 crc kubenswrapper[4761]: I0307 09:16:18.668015 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get 
\"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:18 crc kubenswrapper[4761]: I0307 09:16:18.668091 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:18 crc kubenswrapper[4761]: I0307 09:16:18.800992 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:18 crc kubenswrapper[4761]: I0307 09:16:18.802449 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:19 crc kubenswrapper[4761]: I0307 09:16:19.044118 4761 patch_prober.go:28] interesting pod/nmstate-webhook-786f45cff4-vrchq container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:19 crc kubenswrapper[4761]: I0307 09:16:19.044216 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" podUID="fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:19 crc kubenswrapper[4761]: I0307 
09:16:19.289261 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="ed86dd3e-17e0-467b-8243-8209a04dcbe1" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.17:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:19 crc kubenswrapper[4761]: I0307 09:16:19.337832 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" podUID="6bdda9de-4711-4fbc-b9d2-5f867691450a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:19 crc kubenswrapper[4761]: I0307 09:16:19.440727 4761 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-2tcxw container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:19 crc kubenswrapper[4761]: I0307 09:16:19.441025 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" podUID="071d5325-8638-4180-aefa-fb07f5533bb2" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:19 crc kubenswrapper[4761]: I0307 09:16:19.742951 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" podUID="bd23eeaa-ed7e-45ea-9a40-613ac4e11120" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:19 crc kubenswrapper[4761]: I0307 
09:16:19.804346 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.157964 4761 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-kfph9 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.158318 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" podUID="b17d76c5-b5d9-4f79-841e-287d05540b40" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.158023 4761 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-kfph9 container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.158410 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" podUID="b17d76c5-b5d9-4f79-841e-287d05540b40" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.275930 4761 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-4l52t container/perses-operator namespace/openshift-operators: Liveness probe status=failure 
output="Get \"http://10.217.0.16:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.275992 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" podUID="0c90c3e5-de84-4cb1-ac22-fe02ca708196" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.16:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.276110 4761 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-4l52t container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.16:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.276175 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" podUID="0c90c3e5-de84-4cb1-ac22-fe02ca708196" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.16:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.399264 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="526b9328-0f86-4c3d-9a27-116742cee11a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.170:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.765938 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.766043 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.766705 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-9475l" podUID="0013064e-ed56-415d-b236-1c92e98194d5" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.805978 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-b5t8f" podUID="26b26086-7428-4218-a5c0-64eb4a9d581f" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.806051 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2bdde810-6429-4553-a9bb-1ccef1f89e2d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.808875 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.812236 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"1443e56814c28961324049739b81f51a64652ab0da2dbb7afb348838a00f0e1f"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed 
liveness probe, will be restarted" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.821182 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2bdde810-6429-4553-a9bb-1ccef1f89e2d" containerName="ceilometer-central-agent" containerID="cri-o://1443e56814c28961324049739b81f51a64652ab0da2dbb7afb348838a00f0e1f" gracePeriod=30 Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.209985 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" podUID="4c23f924-b431-4a3e-819b-713e132885f4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.341484 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" event={"ID":"353016f5-6859-4193-9845-69bf540c7ab3","Type":"ContainerDied","Data":"9529675e209c306d435007b407e16ef496a81a344295e555eb7a95c23cc1f4d7"} Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.341421 4761 generic.go:334] "Generic (PLEG): container finished" podID="353016f5-6859-4193-9845-69bf540c7ab3" containerID="9529675e209c306d435007b407e16ef496a81a344295e555eb7a95c23cc1f4d7" exitCode=1 Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.346943 4761 scope.go:117] "RemoveContainer" containerID="9529675e209c306d435007b407e16ef496a81a344295e555eb7a95c23cc1f4d7" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.402806 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="526b9328-0f86-4c3d-9a27-116742cee11a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.170:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.455847 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.455889 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.455921 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.455995 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.455942 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.456584 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.457844 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"135b390898a3a827582358761059bcc63765210ef6cca72bb8fb26dcfd8484b3"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.457884 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" containerID="cri-o://135b390898a3a827582358761059bcc63765210ef6cca72bb8fb26dcfd8484b3" gracePeriod=30 Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.499418 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.499467 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.618955 4761 prober.go:107] "Probe failed" 
probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" podUID="3dc06a77-85c3-42a9-a972-c3f33e46df4b" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.619029 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.619220 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" podUID="3dc06a77-85c3-42a9-a972-c3f33e46df4b" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.619316 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.633780 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"86236478da25e68057a2cee3c5365a7298b691e70eadb4671f40e0f1ed9dd870"} pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" containerMessage="Container webhook-server failed liveness probe, will be restarted" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.633916 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" podUID="3dc06a77-85c3-42a9-a972-c3f33e46df4b" containerName="webhook-server" containerID="cri-o://86236478da25e68057a2cee3c5365a7298b691e70eadb4671f40e0f1ed9dd870" gracePeriod=2 Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 
09:16:21.711123 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" podUID="563c8932-7287-4158-bb9a-7f464230ae9f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.802096 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.804135 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-b5t8f" podUID="26b26086-7428-4218-a5c0-64eb4a9d581f" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.808034 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-5p7lw" podUID="dc70d269-9a38-4cf3-a494-956420600965" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.808659 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-5p7lw" podUID="dc70d269-9a38-4cf3-a494-956420600965" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.835444 4761 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 
09:16:21.835540 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.835630 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.840834 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-scheduler" containerStatusID={"Type":"cri-o","ID":"1e4bb8a136f486751b2586b4b85b0c32cd97d2886348c6d4c4ba348f5d98a501"} pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" containerMessage="Container kube-scheduler failed liveness probe, will be restarted"
Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.841199 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" containerID="cri-o://1e4bb8a136f486751b2586b4b85b0c32cd97d2886348c6d4c4ba348f5d98a501" gracePeriod=30
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.367892 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.368246 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-lzrcd"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.368270 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.367936 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.368410 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-lzrcd"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.368437 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-lzrcd"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.370139 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"734ddf0b9f61b47fe6555044a7cc84fd3ee785ebb8420becbfe23b851f2d2a4b"} pod="metallb-system/frr-k8s-lzrcd" containerMessage="Container controller failed liveness probe, will be restarted"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.370177 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"ff300f5efb12f93334587e3d904527e3fa66d7acbf968e46eb5467420491f1c9"} pod="metallb-system/frr-k8s-lzrcd" containerMessage="Container frr failed liveness probe, will be restarted"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.370285 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="controller" containerID="cri-o://734ddf0b9f61b47fe6555044a7cc84fd3ee785ebb8420becbfe23b851f2d2a4b" gracePeriod=2
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.408142 4761 patch_prober.go:28] interesting pod/thanos-querier-6f4577c6dd-q542m container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.408210 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" podUID="fe7ce149-7c15-4b79-a744-d98a58d8407d" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.449919 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-86ddb6bd46-m2tp4" podUID="adfa916b-8977-446f-9387-932788e51e10" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.449991 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/controller-86ddb6bd46-m2tp4"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.450545 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-86ddb6bd46-m2tp4" podUID="adfa916b-8977-446f-9387-932788e51e10" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.450674 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-m2tp4"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.451338 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"f5996be1025b4f5f0291b64b79b5aca0f48f5e117c9cede01f26742efeeaacd6"} pod="metallb-system/controller-86ddb6bd46-m2tp4" containerMessage="Container controller failed liveness probe, will be restarted"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.451400 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/controller-86ddb6bd46-m2tp4" podUID="adfa916b-8977-446f-9387-932788e51e10" containerName="controller" containerID="cri-o://f5996be1025b4f5f0291b64b79b5aca0f48f5e117c9cede01f26742efeeaacd6" gracePeriod=2
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.457235 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.457433 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.560432 4761 patch_prober.go:28] interesting pod/console-56dd85c946-zcd4c container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.560514 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-56dd85c946-zcd4c" podUID="8bf201ac-6f66-42fb-83bd-d5faaf6dd126" containerName="console" probeResult="failure" output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.560621 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-56dd85c946-zcd4c"
Mar 07 09:16:22 crc kubenswrapper[4761]: E0307 09:16:22.653858 4761 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.733233 4761 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.733312 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.740821 4761 trace.go:236] Trace[775356276]: "Calculate volume metrics of glance for pod openstack/glance-default-internal-api-0" (07-Mar-2026 09:16:18.611) (total time: 4108ms):
Mar 07 09:16:22 crc kubenswrapper[4761]: Trace[775356276]: [4.108625566s] [4.108625566s] END
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.941929 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" podUID="ffb7fdc9-854e-4990-81e1-b14fb9966476" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.942427 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" podUID="ffb7fdc9-854e-4990-81e1-b14fb9966476" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.949318 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.343870 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" podUID="bf4af368-4dee-4a4a-8c43-fd7991ac3366" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.343907 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" podUID="90a2f442-aea1-44ac-bbb8-ba58c0969806" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.344054 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.344209 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" podUID="9554e552-2329-4e93-835e-9dbcad7b7519" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.378315 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" event={"ID":"353016f5-6859-4193-9845-69bf540c7ab3","Type":"ContainerStarted","Data":"90548e303820cef83d4953e821f43164479d4ed0d70e3a267e7e63ed5daa646c"}
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.378566 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.382871 4761 generic.go:334] "Generic (PLEG): container finished" podID="8a7603da-0d59-431b-82c9-59c887e9f8d6" containerID="3517be816e0f2c5d9edb7477be01e1bc04ecb675d7079511a75d3e4d093fa6bd" exitCode=1
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.382903 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" event={"ID":"8a7603da-0d59-431b-82c9-59c887e9f8d6","Type":"ContainerDied","Data":"3517be816e0f2c5d9edb7477be01e1bc04ecb675d7079511a75d3e4d093fa6bd"}
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.384353 4761 scope.go:117] "RemoveContainer" containerID="3517be816e0f2c5d9edb7477be01e1bc04ecb675d7079511a75d3e4d093fa6bd"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.425950 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" podUID="3b477f52-57ee-4037-af3a-fa987453bdf2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.426293 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.466979 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" podUID="a4bc9370-c64d-4e5e-a0bd-70297abb8c0d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.466986 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.509527 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" podUID="9dcfc7f8-35e7-4fab-bb7a-c900caf10641" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.593910 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" podUID="b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.593993 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" podUID="b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.594087 4761 patch_prober.go:28] interesting pod/console-56dd85c946-zcd4c container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.594248 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-56dd85c946-zcd4c" podUID="8bf201ac-6f66-42fb-83bd-d5faaf6dd126" containerName="console" probeResult="failure" output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.656870 4761 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.656928 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.656998 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.656874 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" podUID="0ce5a055-df90-4071-a5cf-f7361e01e5fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.762992 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" podUID="baefa6a4-53d3-4158-a74f-87c9b766d760" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.802707 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-hqkkk" podUID="b9d0650f-8057-46e1-a006-f240615ce96f" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.805821 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.807428 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-dbw8z" podUID="de1f85b3-124d-434b-b053-4a24859497f1" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.807484 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-j8w2n" podUID="69902561-929c-428a-8dab-7a9a91fb3084" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.809264 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-hqkkk" podUID="b9d0650f-8057-46e1-a006-f240615ce96f" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.810930 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-j8w2n" podUID="69902561-929c-428a-8dab-7a9a91fb3084" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.819234 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-dbw8z" podUID="de1f85b3-124d-434b-b053-4a24859497f1" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.888871 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.888930 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.888867 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" podUID="2db89b29-3889-4242-9ede-98140f3f8319" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.889031 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.971984 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-75b4z" podUID="193543ae-839d-485e-a238-ae40e69f7b24" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.972070 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-75b4z"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.971917 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.972540 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.972598 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.973763 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"728c9d850e2981887e404b1c4d33ab7b98374d312289c774f73b86896ee865e6"} pod="metallb-system/speaker-75b4z" containerMessage="Container speaker failed liveness probe, will be restarted"
Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.973840 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-75b4z" podUID="193543ae-839d-485e-a238-ae40e69f7b24" containerName="speaker" containerID="cri-o://728c9d850e2981887e404b1c4d33ab7b98374d312289c774f73b86896ee865e6" gracePeriod=2
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.013925 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" podUID="0febfb54-7188-4247-8d9b-2f166bf597ee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.014033 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015174 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015216 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015264 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015282 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015307 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015314 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015345 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015395 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015418 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015446 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.016235 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"a384d8e72abca7f94ea5bd0fc5a2b830afa37dc0ef04bd043411a226b34f720c"} pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" containerMessage="Container route-controller-manager failed liveness probe, will be restarted"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.016274 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" containerID="cri-o://a384d8e72abca7f94ea5bd0fc5a2b830afa37dc0ef04bd043411a226b34f720c" gracePeriod=30
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.022889 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"d06988cae3b64334503e789d0e91e85389860e0eeeb5b563991a7021feb36127"} pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" containerMessage="Container controller-manager failed liveness probe, will be restarted"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.022944 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" containerID="cri-o://d06988cae3b64334503e789d0e91e85389860e0eeeb5b563991a7021feb36127" gracePeriod=30
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.055973 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" podUID="0bfdda94-7f9c-45d0-897f-0b65cf16e0fd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.055985 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-75b4z" podUID="193543ae-839d-485e-a238-ae40e69f7b24" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.056146 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-75b4z"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.081961 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.097227 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" podUID="0a9a2953-a51f-42b6-8ff8-d3f860ff6377" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.097339 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.098424 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" podUID="9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.100200 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.172915 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" podUID="6540426d-eaf7-4f8f-ab46-8305c545e1cb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.255132 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" podUID="bc92e2bf-a093-4327-a1cd-807a2d916864" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.255160 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" podUID="6c6a959e-39ee-46ae-9cc5-03fe72cedb7a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.255258 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.325931 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podUID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.326065 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.368022 4761 scope.go:117] "RemoveContainer" containerID="c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.386895 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" podUID="90a2f442-aea1-44ac-bbb8-ba58c0969806" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.394734 4761 generic.go:334] "Generic (PLEG): container finished" podID="3dc06a77-85c3-42a9-a972-c3f33e46df4b" containerID="86236478da25e68057a2cee3c5365a7298b691e70eadb4671f40e0f1ed9dd870" exitCode=137
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.394762 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" event={"ID":"3dc06a77-85c3-42a9-a972-c3f33e46df4b","Type":"ContainerDied","Data":"86236478da25e68057a2cee3c5365a7298b691e70eadb4671f40e0f1ed9dd870"}
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.404152 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.406530 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.407953 4761 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2e6600470846f9c3bf1c986582e80c3146e6e566fe0df1156004be04da8a6964" exitCode=1
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.409054 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2e6600470846f9c3bf1c986582e80c3146e6e566fe0df1156004be04da8a6964"}
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.409094 4761 scope.go:117] "RemoveContainer" containerID="c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.412459 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="marketplace-operator" containerStatusID={"Type":"cri-o","ID":"b6193bd889ffe38e7587c3bf176f03324132a8fe93273085b2960f8bc71d2e62"} pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" containerMessage="Container marketplace-operator failed liveness probe, will be restarted"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.412507 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" containerID="cri-o://b6193bd889ffe38e7587c3bf176f03324132a8fe93273085b2960f8bc71d2e62" gracePeriod=30
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.413594 4761 scope.go:117] "RemoveContainer" containerID="2e6600470846f9c3bf1c986582e80c3146e6e566fe0df1156004be04da8a6964"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.468961 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" podUID="efa0b70d-ed5b-48ba-a601-bfc64689ed5a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.468983 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" podUID="3b477f52-57ee-4037-af3a-fa987453bdf2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.498609 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.498931 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Mar 07 09:16:24 crc kubenswrapper[4761]: E0307 09:16:24.747026 4761 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-controller-manager_kube-controller-manager-crc_openshift-kube-controller-manager_f614b9022728cf315e60c057852e563e_0 in pod sandbox 265ca0a962ac9ab2261ea0ec32995d99ca075f8c9c50be0e19eb258b0174b0b2 from index: no such id: 'c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942'" containerID="c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.747098 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942"} err="rpc error: code = Unknown desc = failed to delete container 
k8s_kube-controller-manager_kube-controller-manager-crc_openshift-kube-controller-manager_f614b9022728cf315e60c057852e563e_0 in pod sandbox 265ca0a962ac9ab2261ea0ec32995d99ca075f8c9c50be0e19eb258b0174b0b2 from index: no such id: 'c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942'" Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.831639 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out" Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.930914 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.931058 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.055929 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" podUID="0febfb54-7188-4247-8d9b-2f166bf597ee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.238552 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-lzrcd" 
podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="frr" containerID="cri-o://ff300f5efb12f93334587e3d904527e3fa66d7acbf968e46eb5467420491f1c9" gracePeriod=2 Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.297928 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" podUID="bc92e2bf-a093-4327-a1cd-807a2d916864" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.329857 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podUID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.335435 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.466269 4761 generic.go:334] "Generic (PLEG): container finished" podID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerID="734ddf0b9f61b47fe6555044a7cc84fd3ee785ebb8420becbfe23b851f2d2a4b" exitCode=137 Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.466358 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerDied","Data":"734ddf0b9f61b47fe6555044a7cc84fd3ee785ebb8420becbfe23b851f2d2a4b"} Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.472503 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.474990 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.478637 4761 generic.go:334] "Generic (PLEG): container finished" podID="adfa916b-8977-446f-9387-932788e51e10" containerID="f5996be1025b4f5f0291b64b79b5aca0f48f5e117c9cede01f26742efeeaacd6" exitCode=137 Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.478680 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-m2tp4" event={"ID":"adfa916b-8977-446f-9387-932788e51e10","Type":"ContainerDied","Data":"f5996be1025b4f5f0291b64b79b5aca0f48f5e117c9cede01f26742efeeaacd6"} Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.699396 4761 patch_prober.go:28] interesting pod/metrics-server-854cd44758-k9qwx container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.699486 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" podUID="4d4f9001-7d67-467b-8028-ec6162564829" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.699546 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" 
Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.700645 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"edc3b91ba9c93fdc8b8f4ab8405a9cde976a43eb0938c38a97b875a93e760b4c"} pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" containerMessage="Container metrics-server failed liveness probe, will be restarted" Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.700965 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" podUID="4d4f9001-7d67-467b-8028-ec6162564829" containerName="metrics-server" containerID="cri-o://edc3b91ba9c93fdc8b8f4ab8405a9cde976a43eb0938c38a97b875a93e760b4c" gracePeriod=170 Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.971208 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.115025 4761 patch_prober.go:28] interesting pod/monitoring-plugin-67c8dd59f5-sbh4r container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.115079 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" podUID="08721f50-8882-42b0-9370-cbe4508753d3" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.88:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.115176 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.276020 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podUID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.276121 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.276370 4761 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-9vsj5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.276553 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" podUID="0868ef7f-3f74-41e3-bc81-8cf20dc88c43" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.276615 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.277035 4761 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podUID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.277156 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.277500 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"e7a31476ed16910c418acc007c2aa3c105b4cbff3c4f9bf9818a1f397f67cf49"} pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" containerMessage="Container manager failed liveness probe, will be restarted" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.277533 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podUID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerName="manager" containerID="cri-o://e7a31476ed16910c418acc007c2aa3c105b4cbff3c4f9bf9818a1f397f67cf49" gracePeriod=10 Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.278023 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"3aae9d1949f29f8a7ae6aa2ba7150cd8e12626138a303387879fde766ec3acca"} pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.278072 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" 
podUID="0868ef7f-3f74-41e3-bc81-8cf20dc88c43" containerName="authentication-operator" containerID="cri-o://3aae9d1949f29f8a7ae6aa2ba7150cd8e12626138a303387879fde766ec3acca" gracePeriod=30 Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.307898 4761 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-d62lh container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.308345 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" podUID="6092a906-c0c5-4dcd-bb59-a9ea6a3f2745" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.308431 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.449139 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.449250 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.449330 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.454456 4761 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-f9kfv container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.454825 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" podUID="c0d9aa49-bf5e-4663-9523-a67b07e95721" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.455559 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": context deadline exceeded" start-of-body= Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.455650 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": context deadline exceeded" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.456903 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 
07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.457062 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.460324 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"d2ae8f588889d280d8dd782663b71192c29cd20c81435bd1b5054bad8dc9b285"} pod="openshift-console-operator/console-operator-58897d9998-6qsbw" containerMessage="Container console-operator failed liveness probe, will be restarted" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.460562 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" containerID="cri-o://d2ae8f588889d280d8dd782663b71192c29cd20c81435bd1b5054bad8dc9b285" gracePeriod=30 Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.493603 4761 generic.go:334] "Generic (PLEG): container finished" podID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" containerID="c71116e6a3a9f6c29260500b75d3ed61bd2dbad7aa34d0bdfa582d386bd356a0" exitCode=1 Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.493675 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" event={"ID":"7d43dfb0-643f-4e45-8e27-42b96b2c5ff9","Type":"ContainerDied","Data":"c71116e6a3a9f6c29260500b75d3ed61bd2dbad7aa34d0bdfa582d386bd356a0"} Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.495035 4761 scope.go:117] "RemoveContainer" containerID="c71116e6a3a9f6c29260500b75d3ed61bd2dbad7aa34d0bdfa582d386bd356a0" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.497641 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" 
event={"ID":"8a7603da-0d59-431b-82c9-59c887e9f8d6","Type":"ContainerStarted","Data":"d28a062251b89acb55b16ddac84fb5db54b70a511dbf5513f9f952f653963939"} Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.497706 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.499666 4761 generic.go:334] "Generic (PLEG): container finished" podID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerID="135b390898a3a827582358761059bcc63765210ef6cca72bb8fb26dcfd8484b3" exitCode=0 Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.499733 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" event={"ID":"d29980e5-d546-4d88-9ff3-1ee39ddda37c","Type":"ContainerDied","Data":"135b390898a3a827582358761059bcc63765210ef6cca72bb8fb26dcfd8484b3"} Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.502051 4761 generic.go:334] "Generic (PLEG): container finished" podID="4c23f924-b431-4a3e-819b-713e132885f4" containerID="63c84d3254baf5d95dfdfc00082aac6d5f37aea286b07b39f9aa5191bea283bc" exitCode=1 Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.502121 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" event={"ID":"4c23f924-b431-4a3e-819b-713e132885f4","Type":"ContainerDied","Data":"63c84d3254baf5d95dfdfc00082aac6d5f37aea286b07b39f9aa5191bea283bc"} Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.502569 4761 scope.go:117] "RemoveContainer" containerID="63c84d3254baf5d95dfdfc00082aac6d5f37aea286b07b39f9aa5191bea283bc" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.506290 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" 
event={"ID":"3dc06a77-85c3-42a9-a972-c3f33e46df4b","Type":"ContainerStarted","Data":"5ad83e343ac2a64cb8e7e00813ad64b3bd93fdd72fb2a003a93ea11b999c7d89"} Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.506613 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.514691 4761 generic.go:334] "Generic (PLEG): container finished" podID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerID="ff300f5efb12f93334587e3d904527e3fa66d7acbf968e46eb5467420491f1c9" exitCode=143 Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.514748 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerDied","Data":"ff300f5efb12f93334587e3d904527e3fa66d7acbf968e46eb5467420491f1c9"} Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.547008 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.547061 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.547106 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-2lhb8" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.548158 4761 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="download-server" containerStatusID={"Type":"cri-o","ID":"0c6fac619c77e2e5bbca7ba4216168dfb98fbe2c07537854abdd01da802bb57c"} pod="openshift-console/downloads-7954f5f757-2lhb8" containerMessage="Container download-server failed liveness probe, will be restarted" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.548187 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" containerID="cri-o://0c6fac619c77e2e5bbca7ba4216168dfb98fbe2c07537854abdd01da802bb57c" gracePeriod=2 Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.589228 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.589332 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.589454 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2lhb8" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.589670 4761 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-pvm88 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:26 crc 
kubenswrapper[4761]: I0307 09:16:26.589779 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" podUID="22aee2b0-8c5f-486a-b74f-51b6452c7f8c" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.589914 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.805547 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.805705 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.805890 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.805988 4761 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.806134 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.806173 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.807849 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"5fcc2c691603f4627e78e1eaff03f53c949e513e4a726f449c6bcc6c90c6849a"} pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" containerMessage="Container oauth-openshift failed liveness probe, will be restarted" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.954566 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.954656 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 
09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.954761 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.955138 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.955316 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.955375 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.957430 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"013056b0040015113182560c14699f07344cb8e1128183fb51d69460d98786f5"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" containerMessage="Container packageserver failed liveness probe, will be restarted" Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.957493 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" 
containerID="cri-o://013056b0040015113182560c14699f07344cb8e1128183fb51d69460d98786f5" gracePeriod=30 Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.047902 4761 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-52lfx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.047909 4761 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-52lfx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.047977 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" podUID="25717bfc-51a4-4724-bbed-70d94a322755" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.048015 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" podUID="25717bfc-51a4-4724-bbed-70d94a322755" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.048076 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 09:16:27 crc 
kubenswrapper[4761]: I0307 09:16:27.048103 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.049676 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="package-server-manager" containerStatusID={"Type":"cri-o","ID":"6801f5f398f60f8cca3cb48f6bcfe174267a879c24b0e6d47d8d9eb908cb3029"} pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" containerMessage="Container package-server-manager failed liveness probe, will be restarted" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.049741 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" podUID="25717bfc-51a4-4724-bbed-70d94a322755" containerName="package-server-manager" containerID="cri-o://6801f5f398f60f8cca3cb48f6bcfe174267a879c24b0e6d47d8d9eb908cb3029" gracePeriod=30 Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.116413 4761 patch_prober.go:28] interesting pod/monitoring-plugin-67c8dd59f5-sbh4r container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.116484 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" podUID="08721f50-8882-42b0-9370-cbe4508753d3" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.88:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.195531 4761 patch_prober.go:28] interesting 
pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.195969 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.195617 4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.196020 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.196043 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.196143 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.198415 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="olm-operator" containerStatusID={"Type":"cri-o","ID":"c24e0c02f1354d47f13663f7ab1c31413ac24da72c1da481aa037102e80c6c72"} pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" containerMessage="Container olm-operator failed liveness probe, will be restarted" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.198501 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" containerID="cri-o://c24e0c02f1354d47f13663f7ab1c31413ac24da72c1da481aa037102e80c6c72" gracePeriod=30 Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.307972 4761 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-d62lh container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.308051 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" podUID="6092a906-c0c5-4dcd-bb59-a9ea6a3f2745" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.308754 4761 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-d62lh container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.308819 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" podUID="6092a906-c0c5-4dcd-bb59-a9ea6a3f2745" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.317877 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-hqsjt container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.317916 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podUID="efc019b2-ac66-44ef-a1e7-cce4db209456" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.317877 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podUID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.333961 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-b2qfh container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get 
\"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.334060 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podUID="b942b317-2819-4d06-9e2a-ed257dd6e63e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:27 crc kubenswrapper[4761]: E0307 09:16:27.412399 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": context deadline exceeded" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.449940 4761 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-f9kfv container/loki-querier namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.450004 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" podUID="c0d9aa49-bf5e-4663-9523-a67b07e95721" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.455308 4761 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.455377 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="133e9b5e-adcc-4dd6-b762-fc29c779b70a" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.455463 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.458212 4761 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-f9kfv container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.458250 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" podUID="c0d9aa49-bf5e-4663-9523-a67b07e95721" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.499241 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.499742 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.509047 4761 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.509116 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="ed3dc6dd-e534-41c2-b652-4aa0714797a0" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.509199 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.538731 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" event={"ID":"d29980e5-d546-4d88-9ff3-1ee39ddda37c","Type":"ContainerStarted","Data":"d1acac33e6040a801351f539764494247be1870c76a985618c74939192b21aee"} Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.539145 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.539275 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 
10.217.0.80:8443: connect: connection refused" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.539318 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.542571 4761 generic.go:334] "Generic (PLEG): container finished" podID="0a9a2953-a51f-42b6-8ff8-d3f860ff6377" containerID="fecc6f8d9fdaa78ed2073c88875eb1bbd5196a4b0d106a15e79e884489484194" exitCode=1 Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.542678 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" event={"ID":"0a9a2953-a51f-42b6-8ff8-d3f860ff6377","Type":"ContainerDied","Data":"fecc6f8d9fdaa78ed2073c88875eb1bbd5196a4b0d106a15e79e884489484194"} Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.543624 4761 scope.go:117] "RemoveContainer" containerID="fecc6f8d9fdaa78ed2073c88875eb1bbd5196a4b0d106a15e79e884489484194" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.558813 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerStarted","Data":"6e32adaada27f659b00976df80ca3e518d3568608a52ee2c4e94ed1aa89a2569"} Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.558863 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerStarted","Data":"e27c10c23e5db36c08e3dff05d4efabef5152b0c3f14c7f5ee992e24ce5e694c"} Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.558968 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="metallb-system/frr-k8s-lzrcd" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.563746 4761 generic.go:334] "Generic (PLEG): container finished" podID="193543ae-839d-485e-a238-ae40e69f7b24" containerID="728c9d850e2981887e404b1c4d33ab7b98374d312289c774f73b86896ee865e6" exitCode=137 Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.563801 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-75b4z" event={"ID":"193543ae-839d-485e-a238-ae40e69f7b24","Type":"ContainerDied","Data":"728c9d850e2981887e404b1c4d33ab7b98374d312289c774f73b86896ee865e6"} Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.563849 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-75b4z" event={"ID":"193543ae-839d-485e-a238-ae40e69f7b24","Type":"ContainerStarted","Data":"c2c682728920db7425b25796507bb7ca8861158ceea6700f682ad3d94b22d7bb"} Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.563926 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-75b4z" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.566557 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-m2tp4" event={"ID":"adfa916b-8977-446f-9387-932788e51e10","Type":"ContainerStarted","Data":"925901d509240dc83e547d47f0f9789fce3ab08eabf028603fabb6d31cd086e0"} Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.566692 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.569387 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" event={"ID":"7d43dfb0-643f-4e45-8e27-42b96b2c5ff9","Type":"ContainerStarted","Data":"a546f614e5c4bd568263e3743d3cf6a4cbe09dc627ead4b75c87520993ea7072"} Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.569619 4761 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.571391 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.572093 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.572550 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b73830fee08ceedd2a68ad85e578f362420d9f46dec100ede362bc90822d2ff9"} Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.575920 4761 generic.go:334] "Generic (PLEG): container finished" podID="9dcfc7f8-35e7-4fab-bb7a-c900caf10641" containerID="41d06dd5c60f0f837fe1ccdb037076ad6bdd8e6892d4a4275ed07e5197267ea1" exitCode=1 Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.575996 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" event={"ID":"9dcfc7f8-35e7-4fab-bb7a-c900caf10641","Type":"ContainerDied","Data":"41d06dd5c60f0f837fe1ccdb037076ad6bdd8e6892d4a4275ed07e5197267ea1"} Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.577576 4761 scope.go:117] "RemoveContainer" containerID="41d06dd5c60f0f837fe1ccdb037076ad6bdd8e6892d4a4275ed07e5197267ea1" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.578105 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" 
event={"ID":"4c23f924-b431-4a3e-819b-713e132885f4","Type":"ContainerStarted","Data":"81d834133e64abcd31060159f75812cfb75e8f3e3a69aaa33cfc51f0da7067be"} Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.578419 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.588603 4761 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-pvm88 container/loki-query-frontend namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.588658 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" podUID="22aee2b0-8c5f-486a-b74f-51b6452c7f8c" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.590060 4761 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.590089 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="2d390fba-d423-4b88-90b2-0b291fe8e35b" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:27 crc 
kubenswrapper[4761]: I0307 09:16:27.590436 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.590475 4761 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-pvm88 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.590557 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" podUID="22aee2b0-8c5f-486a-b74f-51b6452c7f8c" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.737985 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.740220 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.743488 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.802353 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.802434 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" 
podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.802445 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.802486 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.804164 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"bc4ce0a34cb67bcf3f01549fd92d0bc8cb34dba7e3ad31088b50aae53d160618"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.847927 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.848347 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.848943 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.950583 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.955356 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.955396 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.090908 4761 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-52lfx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.090986 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" podUID="25717bfc-51a4-4724-bbed-70d94a322755" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.594079 4761 generic.go:334] "Generic (PLEG): container finished" podID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerID="e7a31476ed16910c418acc007c2aa3c105b4cbff3c4f9bf9818a1f397f67cf49" exitCode=0 Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 
09:16:28.594808 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" event={"ID":"6a6b6075-ec04-418f-ba28-09f11f19b78e","Type":"ContainerDied","Data":"e7a31476ed16910c418acc007c2aa3c105b4cbff3c4f9bf9818a1f397f67cf49"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.596616 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" event={"ID":"0a9a2953-a51f-42b6-8ff8-d3f860ff6377","Type":"ContainerStarted","Data":"58528443a5a2e40be5760c0862b6d4366c5301e088154e8cb8a832ac78333a15"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.597305 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.600896 4761 generic.go:334] "Generic (PLEG): container finished" podID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerID="0c6fac619c77e2e5bbca7ba4216168dfb98fbe2c07537854abdd01da802bb57c" exitCode=0 Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.601020 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2lhb8" event={"ID":"55412b4c-53c7-4b21-8d7c-87879ef79ed0","Type":"ContainerDied","Data":"0c6fac619c77e2e5bbca7ba4216168dfb98fbe2c07537854abdd01da802bb57c"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.604563 4761 generic.go:334] "Generic (PLEG): container finished" podID="0bfdda94-7f9c-45d0-897f-0b65cf16e0fd" containerID="7a00a99d23bb211019e154c1e72bda4dd47906c6798accb6dc115db1c493b1ee" exitCode=1 Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.604645 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" 
event={"ID":"0bfdda94-7f9c-45d0-897f-0b65cf16e0fd","Type":"ContainerDied","Data":"7a00a99d23bb211019e154c1e72bda4dd47906c6798accb6dc115db1c493b1ee"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.605801 4761 scope.go:117] "RemoveContainer" containerID="7a00a99d23bb211019e154c1e72bda4dd47906c6798accb6dc115db1c493b1ee" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.609638 4761 generic.go:334] "Generic (PLEG): container finished" podID="90a2f442-aea1-44ac-bbb8-ba58c0969806" containerID="b587059742bc05264204142dc6e664c7e30176cfb85d713077ea31e1ee3d15eb" exitCode=1 Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.609709 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" event={"ID":"90a2f442-aea1-44ac-bbb8-ba58c0969806","Type":"ContainerDied","Data":"b587059742bc05264204142dc6e664c7e30176cfb85d713077ea31e1ee3d15eb"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.610664 4761 scope.go:117] "RemoveContainer" containerID="b587059742bc05264204142dc6e664c7e30176cfb85d713077ea31e1ee3d15eb" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.616120 4761 generic.go:334] "Generic (PLEG): container finished" podID="0febfb54-7188-4247-8d9b-2f166bf597ee" containerID="4cdfb31dc409d7b321c810a425510e042a6a930609002a81bab54dc73c830f3c" exitCode=1 Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.616203 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" event={"ID":"0febfb54-7188-4247-8d9b-2f166bf597ee","Type":"ContainerDied","Data":"4cdfb31dc409d7b321c810a425510e042a6a930609002a81bab54dc73c830f3c"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.617050 4761 scope.go:117] "RemoveContainer" containerID="4cdfb31dc409d7b321c810a425510e042a6a930609002a81bab54dc73c830f3c" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.625978 4761 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" event={"ID":"9dcfc7f8-35e7-4fab-bb7a-c900caf10641","Type":"ContainerStarted","Data":"fcfeaac218192a8908ae1155309ba465a0ba2bc1e275908ad492ccaa025fab99"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.628867 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.660293 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-6qsbw_d704dc9c-9c1f-4f45-8438-34eda153e3b5/console-operator/0.log" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.660343 4761 generic.go:334] "Generic (PLEG): container finished" podID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerID="d2ae8f588889d280d8dd782663b71192c29cd20c81435bd1b5054bad8dc9b285" exitCode=1 Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.660441 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" event={"ID":"d704dc9c-9c1f-4f45-8438-34eda153e3b5","Type":"ContainerDied","Data":"d2ae8f588889d280d8dd782663b71192c29cd20c81435bd1b5054bad8dc9b285"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.670408 4761 generic.go:334] "Generic (PLEG): container finished" podID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerID="c24e0c02f1354d47f13663f7ab1c31413ac24da72c1da481aa037102e80c6c72" exitCode=0 Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.670465 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" event={"ID":"0ea66074-912c-4797-b4a5-cfd5b8927d2e","Type":"ContainerDied","Data":"c24e0c02f1354d47f13663f7ab1c31413ac24da72c1da481aa037102e80c6c72"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.683827 4761 generic.go:334] "Generic (PLEG): container 
finished" podID="9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e" containerID="372765249b68815b4ab28701485402ebab9e58d438ea057d02618ffbc90dceda" exitCode=1 Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.685182 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" event={"ID":"9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e","Type":"ContainerDied","Data":"372765249b68815b4ab28701485402ebab9e58d438ea057d02618ffbc90dceda"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.686374 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" start-of-body= Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.686411 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.693221 4761 scope.go:117] "RemoveContainer" containerID="372765249b68815b4ab28701485402ebab9e58d438ea057d02618ffbc90dceda" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.738766 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.804563 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 
09:16:28.806056 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.806584 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.806691 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.808067 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"833704fdf8ae28e1b304b84c220a7f77b10ff62bbb503ec99590b5acc753c1c6"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.808216 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.014913 4761 patch_prober.go:28] interesting pod/nmstate-webhook-786f45cff4-vrchq container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.015290 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" podUID="fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.015404 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.048928 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" podUID="bd23eeaa-ed7e-45ea-9a40-613ac4e11120" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": EOF" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.049086 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" podUID="bd23eeaa-ed7e-45ea-9a40-613ac4e11120" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": EOF" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.049162 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.050088 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" podUID="bd23eeaa-ed7e-45ea-9a40-613ac4e11120" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": dial tcp 10.217.0.120:8081: connect: connection refused" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.357902 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" podUID="6bdda9de-4711-4fbc-b9d2-5f867691450a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.358132 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" podUID="6bdda9de-4711-4fbc-b9d2-5f867691450a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.358445 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 09:16:29 crc kubenswrapper[4761]: E0307 09:16:29.385571 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3667d397_4aef_4ee2_8571_8ee7c93c719b.slice/crio-conmon-a384d8e72abca7f94ea5bd0fc5a2b830afa37dc0ef04bd043411a226b34f720c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ce5a055_df90_4071_a5cf_f7361e01e5fe.slice/crio-a4a060c5792fba1d01323c32b54ee9777e8c17a5d0136180a80a26968b64713b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd23eeaa_ed7e_45ea_9a40_613ac4e11120.slice/crio-eecc7c748f3930aebdf862e90525d51e501de54c73b7adcd3a979cf5e63c2b7d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1abc2486_5f9c_4f0a_af63_365bcc4c1c61.slice/crio-d06988cae3b64334503e789d0e91e85389860e0eeeb5b563991a7021feb36127.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71ec20b6_ead9_496e_bd0d_97702212e64d.slice/crio-013056b0040015113182560c14699f07344cb8e1128183fb51d69460d98786f5.scope\": RecentStats: unable to find data in 
memory cache]" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.417244 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.520329 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.714802 4761 generic.go:334] "Generic (PLEG): container finished" podID="2db89b29-3889-4242-9ede-98140f3f8319" containerID="176b93f48fd4e95dc46217e4461d56b48c71dd75f37f939290d39f8488266098" exitCode=1 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.738899 4761 generic.go:334] "Generic (PLEG): container finished" podID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerID="013056b0040015113182560c14699f07344cb8e1128183fb51d69460d98786f5" exitCode=0 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.741072 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" event={"ID":"2db89b29-3889-4242-9ede-98140f3f8319","Type":"ContainerDied","Data":"176b93f48fd4e95dc46217e4461d56b48c71dd75f37f939290d39f8488266098"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.741144 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" event={"ID":"71ec20b6-ead9-496e-bd0d-97702212e64d","Type":"ContainerDied","Data":"013056b0040015113182560c14699f07344cb8e1128183fb51d69460d98786f5"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.747997 4761 scope.go:117] "RemoveContainer" containerID="176b93f48fd4e95dc46217e4461d56b48c71dd75f37f939290d39f8488266098" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.753607 4761 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" 
containerID="1e4bb8a136f486751b2586b4b85b0c32cd97d2886348c6d4c4ba348f5d98a501" exitCode=0 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.753688 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1e4bb8a136f486751b2586b4b85b0c32cd97d2886348c6d4c4ba348f5d98a501"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.759105 4761 generic.go:334] "Generic (PLEG): container finished" podID="bf4af368-4dee-4a4a-8c43-fd7991ac3366" containerID="308c66f0f8ac1094f2ea1c132459e50a50223ab81c311c156411654064d3d522" exitCode=1 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.759257 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" event={"ID":"bf4af368-4dee-4a4a-8c43-fd7991ac3366","Type":"ContainerDied","Data":"308c66f0f8ac1094f2ea1c132459e50a50223ab81c311c156411654064d3d522"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.761693 4761 generic.go:334] "Generic (PLEG): container finished" podID="0ce5a055-df90-4071-a5cf-f7361e01e5fe" containerID="a4a060c5792fba1d01323c32b54ee9777e8c17a5d0136180a80a26968b64713b" exitCode=1 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.761749 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" event={"ID":"0ce5a055-df90-4071-a5cf-f7361e01e5fe","Type":"ContainerDied","Data":"a4a060c5792fba1d01323c32b54ee9777e8c17a5d0136180a80a26968b64713b"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.763167 4761 scope.go:117] "RemoveContainer" containerID="308c66f0f8ac1094f2ea1c132459e50a50223ab81c311c156411654064d3d522" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.763230 4761 scope.go:117] "RemoveContainer" containerID="a4a060c5792fba1d01323c32b54ee9777e8c17a5d0136180a80a26968b64713b" Mar 07 09:16:29 crc 
kubenswrapper[4761]: I0307 09:16:29.776212 4761 generic.go:334] "Generic (PLEG): container finished" podID="bd23eeaa-ed7e-45ea-9a40-613ac4e11120" containerID="eecc7c748f3930aebdf862e90525d51e501de54c73b7adcd3a979cf5e63c2b7d" exitCode=1 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.776264 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" event={"ID":"bd23eeaa-ed7e-45ea-9a40-613ac4e11120","Type":"ContainerDied","Data":"eecc7c748f3930aebdf862e90525d51e501de54c73b7adcd3a979cf5e63c2b7d"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.776784 4761 scope.go:117] "RemoveContainer" containerID="eecc7c748f3930aebdf862e90525d51e501de54c73b7adcd3a979cf5e63c2b7d" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.786897 4761 generic.go:334] "Generic (PLEG): container finished" podID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerID="a384d8e72abca7f94ea5bd0fc5a2b830afa37dc0ef04bd043411a226b34f720c" exitCode=0 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.786976 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" event={"ID":"3667d397-4aef-4ee2-8571-8ee7c93c719b","Type":"ContainerDied","Data":"a384d8e72abca7f94ea5bd0fc5a2b830afa37dc0ef04bd043411a226b34f720c"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.789542 4761 generic.go:334] "Generic (PLEG): container finished" podID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerID="d06988cae3b64334503e789d0e91e85389860e0eeeb5b563991a7021feb36127" exitCode=0 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.789595 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" event={"ID":"1abc2486-5f9c-4f0a-af63-365bcc4c1c61","Type":"ContainerDied","Data":"d06988cae3b64334503e789d0e91e85389860e0eeeb5b563991a7021feb36127"} Mar 07 09:16:29 crc 
kubenswrapper[4761]: I0307 09:16:29.801796 4761 generic.go:334] "Generic (PLEG): container finished" podID="a4bc9370-c64d-4e5e-a0bd-70297abb8c0d" containerID="b9cd9bfba41b5d6d90b930c8b19de9899b287c8b027ef3788b2d6083a051a8b4" exitCode=1 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.801881 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" event={"ID":"a4bc9370-c64d-4e5e-a0bd-70297abb8c0d","Type":"ContainerDied","Data":"b9cd9bfba41b5d6d90b930c8b19de9899b287c8b027ef3788b2d6083a051a8b4"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.802837 4761 scope.go:117] "RemoveContainer" containerID="b9cd9bfba41b5d6d90b930c8b19de9899b287c8b027ef3788b2d6083a051a8b4" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.806379 4761 generic.go:334] "Generic (PLEG): container finished" podID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerID="b6193bd889ffe38e7587c3bf176f03324132a8fe93273085b2960f8bc71d2e62" exitCode=0 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.806489 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" event={"ID":"2b3bce52-2720-4999-bf2f-f6808cd3a5fe","Type":"ContainerDied","Data":"b6193bd889ffe38e7587c3bf176f03324132a8fe93273085b2960f8bc71d2e62"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.813259 4761 generic.go:334] "Generic (PLEG): container finished" podID="0868ef7f-3f74-41e3-bc81-8cf20dc88c43" containerID="3aae9d1949f29f8a7ae6aa2ba7150cd8e12626138a303387879fde766ec3acca" exitCode=0 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.813304 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" event={"ID":"0868ef7f-3f74-41e3-bc81-8cf20dc88c43","Type":"ContainerDied","Data":"3aae9d1949f29f8a7ae6aa2ba7150cd8e12626138a303387879fde766ec3acca"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 
09:16:30.032955 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.033459 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.033499 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.437862 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.455452 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" start-of-body= Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.455474 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" start-of-body= Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.455765 4761 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.455866 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.500120 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.500182 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.840331 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" event={"ID":"3667d397-4aef-4ee2-8571-8ee7c93c719b","Type":"ContainerStarted","Data":"fa459122658bb4081d1a7063ea2395a5b9e8ae55ff1029dbb906c069f0840131"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.841474 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.841874 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.841944 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.845451 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" event={"ID":"0ea66074-912c-4797-b4a5-cfd5b8927d2e","Type":"ContainerStarted","Data":"b77abb45592a7460e6fc5bb29ec8cb38f424c6995b56059f3d2ab292ca21dd93"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.845628 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.845923 4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.845971 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" 
podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.848875 4761 generic.go:334] "Generic (PLEG): container finished" podID="2bdde810-6429-4553-a9bb-1ccef1f89e2d" containerID="1443e56814c28961324049739b81f51a64652ab0da2dbb7afb348838a00f0e1f" exitCode=0 Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.848926 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bdde810-6429-4553-a9bb-1ccef1f89e2d","Type":"ContainerDied","Data":"1443e56814c28961324049739b81f51a64652ab0da2dbb7afb348838a00f0e1f"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.851145 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" event={"ID":"0febfb54-7188-4247-8d9b-2f166bf597ee","Type":"ContainerStarted","Data":"87389e51f33b4b75fa4dd2220a64dfe43e1b8a45bdb4b953dc2ef39286823149"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.851423 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.854661 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2lhb8" event={"ID":"55412b4c-53c7-4b21-8d7c-87879ef79ed0","Type":"ContainerStarted","Data":"8b0db96d53a1438df592d7e58c5152b835d75062cabd09fb54e269e169a11fd7"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.854859 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2lhb8" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.855463 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.855509 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.862791 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" event={"ID":"0bfdda94-7f9c-45d0-897f-0b65cf16e0fd","Type":"ContainerStarted","Data":"78bb9239dab9d159c062f828d9fad1089704a2628c5b55354dfe22f8e40472bf"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.864034 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.869314 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" event={"ID":"2b3bce52-2720-4999-bf2f-f6808cd3a5fe","Type":"ContainerStarted","Data":"e87462f5094b033c817515f80e03aaa842701015dcce94174b8e82a74a78b3c1"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.870106 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.870189 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": dial tcp 10.217.0.73:8080: connect: connection refused" start-of-body= Mar 07 
09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.870245 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": dial tcp 10.217.0.73:8080: connect: connection refused" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.877931 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" event={"ID":"9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e","Type":"ContainerStarted","Data":"90835a345e01e477a0e7237f6b67e89ac99d381ca364b6962ae82c895448374b"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.878155 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.880617 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" event={"ID":"1abc2486-5f9c-4f0a-af63-365bcc4c1c61","Type":"ContainerStarted","Data":"4c3e6b560c73f7db1ff343cc61d8cb6a48fa128c96376d36aa2c83239440e0b9"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.881156 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.881225 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" start-of-body= Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.881254 4761 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused"
Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.889351 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"81dab7e2717b36758f5c296f9f362459c6d2f5bd202853783e2eaf4e8d95090d"}
Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.889674 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.906050 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" event={"ID":"90a2f442-aea1-44ac-bbb8-ba58c0969806","Type":"ContainerStarted","Data":"20f3c26709c1fbd3911d2572ebcd6b1f004f50d88b3ade4efc7cf617a84572df"}
Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.906957 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22"
Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.917968 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" event={"ID":"0868ef7f-3f74-41e3-bc81-8cf20dc88c43","Type":"ContainerStarted","Data":"4d06f8397c7786ff73a0467f43d7f01a14d4d8a62e94d7da0615ca36d9c85fea"}
Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.936822 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" event={"ID":"6a6b6075-ec04-418f-ba28-09f11f19b78e","Type":"ContainerStarted","Data":"ca31dacea70953b490f81f01c1507f4c6a756789764361ec086fb413b67eb2ce"}
Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.937802 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc"
Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.949297 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-6qsbw_d704dc9c-9c1f-4f45-8438-34eda153e3b5/console-operator/0.log"
Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.949440 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" event={"ID":"d704dc9c-9c1f-4f45-8438-34eda153e3b5","Type":"ContainerStarted","Data":"1e26d0115083ca17ea018f4e2ae10a8275b622b75a1c3889b99449a6d8aff5f2"}
Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.950772 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6qsbw"
Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.950947 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.950993 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.955474 4761 generic.go:334] "Generic (PLEG): container finished" podID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerID="43fecf17bd70cc24f894f9981f36f699613214c657ae37df741e21de54a09dc3" exitCode=0
Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.955520 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" event={"ID":"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0","Type":"ContainerDied","Data":"43fecf17bd70cc24f894f9981f36f699613214c657ae37df741e21de54a09dc3"}
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.244556 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-lzrcd"
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.425510 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-lzrcd"
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.565479 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-56dd85c946-zcd4c"
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.633052 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerName="galera" containerID="cri-o://833704fdf8ae28e1b304b84c220a7f77b10ff62bbb503ec99590b5acc753c1c6" gracePeriod=28
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.697092 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" containerID="cri-o://bc4ce0a34cb67bcf3f01549fd92d0bc8cb34dba7e3ad31088b50aae53d160618" gracePeriod=27
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.966628 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" event={"ID":"bd23eeaa-ed7e-45ea-9a40-613ac4e11120","Type":"ContainerStarted","Data":"0390df4ef1c8c348fda493b13376d9a82d081c545d6638c9c270a99ba4013bf4"}
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.968131 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w"
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.969496 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" event={"ID":"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0","Type":"ContainerStarted","Data":"b3fdfe8002a3b92873a100111dfdfbb0cead78bf9dc3e27c52af24d09fd64af0"}
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.969770 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48"
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.972210 4761 generic.go:334] "Generic (PLEG): container finished" podID="25717bfc-51a4-4724-bbed-70d94a322755" containerID="6801f5f398f60f8cca3cb48f6bcfe174267a879c24b0e6d47d8d9eb908cb3029" exitCode=0
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.972289 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" event={"ID":"25717bfc-51a4-4724-bbed-70d94a322755","Type":"ContainerDied","Data":"6801f5f398f60f8cca3cb48f6bcfe174267a879c24b0e6d47d8d9eb908cb3029"}
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.972337 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" event={"ID":"25717bfc-51a4-4724-bbed-70d94a322755","Type":"ContainerStarted","Data":"2d407237872dc6ce0cbc360d1a2dcb4b18e056a9f9b1c4c20365280159240248"}
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.972648 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx"
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.974233 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" event={"ID":"0ce5a055-df90-4071-a5cf-f7361e01e5fe","Type":"ContainerStarted","Data":"bc02c2e50db04a2327fc8abe7b045ac09ca8ebb9d343afaf1ace8d9416338f65"}
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.974459 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz"
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.976030 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" event={"ID":"2db89b29-3889-4242-9ede-98140f3f8319","Type":"ContainerStarted","Data":"e64dbdc648e3de38b5e2d65a91e7b33ea4e9a251ac0aca710108e26c53c2d46e"}
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.976188 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b"
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.977988 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" event={"ID":"71ec20b6-ead9-496e-bd0d-97702212e64d","Type":"ContainerStarted","Data":"99206ef12e2344ab7a7377a17b40d76c370a41f7cdbb720e4c4324b969e33e8b"}
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.979141 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt"
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.979229 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body=
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.979270 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused"
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.981273 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" event={"ID":"bf4af368-4dee-4a4a-8c43-fd7991ac3366","Type":"ContainerStarted","Data":"fba62eb5a6b1b5682be4b26569bffa0291f339cba06ca6e7c47e7b423ea6f040"}
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.982044 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q"
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.985961 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bdde810-6429-4553-a9bb-1ccef1f89e2d","Type":"ContainerStarted","Data":"9d75b044a39d59be7bcae8cb25f01356b546913e743d23fd829eb1958ca25090"}
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.991307 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" event={"ID":"a4bc9370-c64d-4e5e-a0bd-70297abb8c0d","Type":"ContainerStarted","Data":"7fb75364b040d920764c35e589a69dfa1c67c12489e9f89d92698fadcdf3247b"}
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.992062 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh"
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.992367 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": dial tcp 10.217.0.73:8080: connect: connection refused" start-of-body=
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.992410 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": dial tcp 10.217.0.73:8080: connect: connection refused"
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.992992 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.993019 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.993082 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.993096 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.993223 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" start-of-body=
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.993276 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.993262 4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.993316 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.993309 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused"
Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.993360 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.330106 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf"
Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.802758 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": dial tcp 10.217.0.73:8080: connect: connection refused" start-of-body=
Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.802756 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": dial tcp 10.217.0.73:8080: connect: connection refused" start-of-body=
Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.803133 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": dial tcp 10.217.0.73:8080: connect: connection refused"
Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.803073 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": dial tcp 10.217.0.73:8080: connect: connection refused"
Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.883847 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.883908 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.885705 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" start-of-body=
Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.885822 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused"
Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.951146 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" podUID="353016f5-6859-4193-9845-69bf540c7ab3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": dial tcp 10.217.0.117:8081: connect: connection refused"
Mar 07 09:16:33 crc kubenswrapper[4761]: I0307 09:16:33.001752 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Mar 07 09:16:33 crc kubenswrapper[4761]: I0307 09:16:33.001815 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Mar 07 09:16:33 crc kubenswrapper[4761]: I0307 09:16:33.001909 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body=
Mar 07 09:16:33 crc kubenswrapper[4761]: I0307 09:16:33.001928 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused"
Mar 07 09:16:33 crc kubenswrapper[4761]: I0307 09:16:33.002253 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Mar 07 09:16:33 crc kubenswrapper[4761]: I0307 09:16:33.002304 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Mar 07 09:16:33 crc kubenswrapper[4761]: I0307 09:16:33.008470 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" podUID="0a9a2953-a51f-42b6-8ff8-d3f860ff6377" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": dial tcp 10.217.0.119:8081: connect: connection refused"
Mar 07 09:16:33 crc kubenswrapper[4761]: I0307 09:16:33.170027 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z"
Mar 07 09:16:33 crc kubenswrapper[4761]: I0307 09:16:33.284084 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podUID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": dial tcp 10.217.0.124:8081: connect: connection refused"
Mar 07 09:16:34 crc kubenswrapper[4761]: I0307 09:16:34.011173 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body=
Mar 07 09:16:34 crc kubenswrapper[4761]: I0307 09:16:34.011549 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused"
Mar 07 09:16:34 crc kubenswrapper[4761]: I0307 09:16:34.881293 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="69ab7bc1-753e-437c-bd70-130581863fde" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.052559 4761 generic.go:334] "Generic (PLEG): container finished" podID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerID="833704fdf8ae28e1b304b84c220a7f77b10ff62bbb503ec99590b5acc753c1c6" exitCode=0
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.052645 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f0ccb6a-6367-409b-b996-4946fa2c8981","Type":"ContainerDied","Data":"833704fdf8ae28e1b304b84c220a7f77b10ff62bbb503ec99590b5acc753c1c6"}
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.194334 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podUID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": dial tcp 10.217.0.126:8081: connect: connection refused"
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.315385 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh"
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.348159 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r"
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.446543 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.447705 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.446595 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.447806 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.459576 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv"
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.506864 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.507215 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.509264 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.509399 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.592068 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88"
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.809812 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.953100 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body=
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.953394 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused"
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.953104 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body=
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.953439 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused"
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.974469 4761 patch_prober.go:28] interesting pod/loki-operator-controller-manager-6d4c45cc-fmrsq container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.51:8081/readyz\": dial tcp 10.217.0.51:8081: connect: connection refused" start-of-body=
Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.974740 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" podUID="8a7603da-0d59-431b-82c9-59c887e9f8d6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.51:8081/readyz\": dial tcp 10.217.0.51:8081: connect: connection refused"
Mar 07 09:16:36 crc kubenswrapper[4761]: E0307 09:16:36.058958 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bc4ce0a34cb67bcf3f01549fd92d0bc8cb34dba7e3ad31088b50aae53d160618" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Mar 07 09:16:36 crc kubenswrapper[4761]: E0307 09:16:36.067949 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bc4ce0a34cb67bcf3f01549fd92d0bc8cb34dba7e3ad31088b50aae53d160618" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Mar 07 09:16:36 crc kubenswrapper[4761]: E0307 09:16:36.069625 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bc4ce0a34cb67bcf3f01549fd92d0bc8cb34dba7e3ad31088b50aae53d160618" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Mar 07 09:16:36 crc kubenswrapper[4761]: E0307 09:16:36.069764 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera"
Mar 07 09:16:36 crc kubenswrapper[4761]: I0307 09:16:36.070610 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f0ccb6a-6367-409b-b996-4946fa2c8981","Type":"ContainerStarted","Data":"bf60eb690e37593da1bb6c0f95222a7db564ef6079b3ebf7c4e3be8c4e0309ef"}
Mar 07 09:16:36 crc kubenswrapper[4761]: I0307 09:16:36.223669 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp"
Mar 07 09:16:36 crc kubenswrapper[4761]: I0307 09:16:36.498651 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Mar 07 09:16:36 crc kubenswrapper[4761]: I0307 09:16:36.498706 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Mar 07 09:16:36 crc kubenswrapper[4761]: I0307 09:16:36.498770 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Mar 07 09:16:36 crc kubenswrapper[4761]: I0307 09:16:36.498797 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Mar 07 09:16:37 crc kubenswrapper[4761]: I0307 09:16:37.457448 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="69ab7bc1-753e-437c-bd70-130581863fde" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 09:16:37 crc kubenswrapper[4761]: I0307 09:16:37.734240 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Mar 07 09:16:37 crc kubenswrapper[4761]: I0307 09:16:37.734290 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.076046 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j6zwc"]
Mar 07 09:16:38 crc kubenswrapper[4761]: E0307 09:16:38.092183 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c04e80-73b0-4955-9310-90ae9b38fcc5" containerName="collect-profiles"
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.092222 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c04e80-73b0-4955-9310-90ae9b38fcc5" containerName="collect-profiles"
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.093290 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5c04e80-73b0-4955-9310-90ae9b38fcc5" containerName="collect-profiles"
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.100769 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547916-42c74"]
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.104739 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j6zwc"
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.104745 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547916-42c74"
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.128133 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.129653 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv"
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.129743 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.138555 4761 generic.go:334] "Generic (PLEG): container finished" podID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerID="bc4ce0a34cb67bcf3f01549fd92d0bc8cb34dba7e3ad31088b50aae53d160618" exitCode=0
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.139791 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe","Type":"ContainerDied","Data":"bc4ce0a34cb67bcf3f01549fd92d0bc8cb34dba7e3ad31088b50aae53d160618"}
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.204540 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547916-42c74"]
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.239762 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-catalog-content\") pod \"community-operators-j6zwc\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " pod="openshift-marketplace/community-operators-j6zwc"
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.240158 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x55fb\" (UniqueName: \"kubernetes.io/projected/39691e56-a95c-4f7c-827a-d88b17d628f4-kube-api-access-x55fb\") pod \"auto-csr-approver-29547916-42c74\" (UID: \"39691e56-a95c-4f7c-827a-d88b17d628f4\") " pod="openshift-infra/auto-csr-approver-29547916-42c74"
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.240261 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-utilities\") pod \"community-operators-j6zwc\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " pod="openshift-marketplace/community-operators-j6zwc"
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.240493 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwwjt\" (UniqueName: \"kubernetes.io/projected/6f5bae71-535d-4369-941e-1602475cda35-kube-api-access-cwwjt\") pod \"community-operators-j6zwc\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " pod="openshift-marketplace/community-operators-j6zwc"
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.256806 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j6zwc"]
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.342559 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x55fb\" (UniqueName: \"kubernetes.io/projected/39691e56-a95c-4f7c-827a-d88b17d628f4-kube-api-access-x55fb\") pod \"auto-csr-approver-29547916-42c74\" (UID: \"39691e56-a95c-4f7c-827a-d88b17d628f4\") " pod="openshift-infra/auto-csr-approver-29547916-42c74"
Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.342626 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-utilities\") pod \"community-operators-j6zwc\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " 
pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.342802 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwwjt\" (UniqueName: \"kubernetes.io/projected/6f5bae71-535d-4369-941e-1602475cda35-kube-api-access-cwwjt\") pod \"community-operators-j6zwc\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.342890 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-catalog-content\") pod \"community-operators-j6zwc\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.346750 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-catalog-content\") pod \"community-operators-j6zwc\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.357588 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-utilities\") pod \"community-operators-j6zwc\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.419219 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x55fb\" (UniqueName: \"kubernetes.io/projected/39691e56-a95c-4f7c-827a-d88b17d628f4-kube-api-access-x55fb\") pod \"auto-csr-approver-29547916-42c74\" (UID: \"39691e56-a95c-4f7c-827a-d88b17d628f4\") " 
pod="openshift-infra/auto-csr-approver-29547916-42c74" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.427832 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwwjt\" (UniqueName: \"kubernetes.io/projected/6f5bae71-535d-4369-941e-1602475cda35-kube-api-access-cwwjt\") pod \"community-operators-j6zwc\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.493393 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547916-42c74" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.510946 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.715247 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 09:16:39 crc kubenswrapper[4761]: I0307 09:16:39.505874 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 09:16:40 crc kubenswrapper[4761]: I0307 09:16:40.041377 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 09:16:40 crc kubenswrapper[4761]: I0307 09:16:40.047597 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 09:16:40 crc kubenswrapper[4761]: I0307 09:16:40.386189 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="69ab7bc1-753e-437c-bd70-130581863fde" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
07 09:16:40 crc kubenswrapper[4761]: I0307 09:16:40.386324 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 07 09:16:40 crc kubenswrapper[4761]: I0307 09:16:40.391949 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"d1593393ea8982a1ba24a2a7870fa9fc1e67f00e525f221c8b96901d677b86a6"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Mar 07 09:16:40 crc kubenswrapper[4761]: I0307 09:16:40.392086 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="69ab7bc1-753e-437c-bd70-130581863fde" containerName="cinder-scheduler" containerID="cri-o://d1593393ea8982a1ba24a2a7870fa9fc1e67f00e525f221c8b96901d677b86a6" gracePeriod=30 Mar 07 09:16:40 crc kubenswrapper[4761]: I0307 09:16:40.614476 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 09:16:40 crc kubenswrapper[4761]: I0307 09:16:40.958032 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 09:16:41 crc kubenswrapper[4761]: I0307 09:16:41.172646 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe","Type":"ContainerStarted","Data":"e4d1e41d1d0ca94bb361998f2018a9b40670407d7ce0b8df2cab50bee1c3bed7"} Mar 07 09:16:41 crc kubenswrapper[4761]: I0307 09:16:41.251581 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-lzrcd" Mar 07 09:16:41 crc kubenswrapper[4761]: I0307 09:16:41.372404 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 09:16:41 crc kubenswrapper[4761]: I0307 09:16:41.437667 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547916-42c74"] Mar 07 09:16:41 crc kubenswrapper[4761]: I0307 09:16:41.466498 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j6zwc"] Mar 07 09:16:42 crc kubenswrapper[4761]: I0307 09:16:42.190254 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547916-42c74" event={"ID":"39691e56-a95c-4f7c-827a-d88b17d628f4","Type":"ContainerStarted","Data":"0ecbbd844c657b743328e4ca73f9925f5650d3ff1bc39843d2e6dbbe8bc15eca"} Mar 07 09:16:42 crc kubenswrapper[4761]: I0307 09:16:42.195556 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6zwc" event={"ID":"6f5bae71-535d-4369-941e-1602475cda35","Type":"ContainerStarted","Data":"59e54b39f5b26ef3510c70a43e8f0e62f502310a1c9a43edf87d39a64d11d5c3"} Mar 07 09:16:42 crc kubenswrapper[4761]: I0307 09:16:42.195592 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6zwc" event={"ID":"6f5bae71-535d-4369-941e-1602475cda35","Type":"ContainerStarted","Data":"9a27f562b02b2754885ebcf54aa377df53929dd08475e16c9c6a548f2b4e320e"} Mar 07 09:16:42 crc kubenswrapper[4761]: I0307 09:16:42.252690 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" Mar 07 09:16:42 crc kubenswrapper[4761]: I0307 09:16:42.309363 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" Mar 07 09:16:42 crc kubenswrapper[4761]: I0307 09:16:42.323412 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.062515 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.062552 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.062571 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.062589 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.064347 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.071532 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.073380 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.074398 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.074529 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" Mar 07 
09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.116518 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.153655 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-75b4z" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.207008 4761 generic.go:334] "Generic (PLEG): container finished" podID="6f5bae71-535d-4369-941e-1602475cda35" containerID="59e54b39f5b26ef3510c70a43e8f0e62f502310a1c9a43edf87d39a64d11d5c3" exitCode=0 Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.207637 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6zwc" event={"ID":"6f5bae71-535d-4369-941e-1602475cda35","Type":"ContainerDied","Data":"59e54b39f5b26ef3510c70a43e8f0e62f502310a1c9a43edf87d39a64d11d5c3"} Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.288296 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.358452 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 09:16:45 crc kubenswrapper[4761]: I0307 09:16:45.216108 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 09:16:45 crc kubenswrapper[4761]: I0307 09:16:45.233083 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6zwc" event={"ID":"6f5bae71-535d-4369-941e-1602475cda35","Type":"ContainerStarted","Data":"c54c0c0f2e512a5f9297e05077e975dc9094225cfea400394ce516275a224228"} Mar 07 09:16:45 crc kubenswrapper[4761]: I0307 09:16:45.453478 4761 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 09:16:45 crc kubenswrapper[4761]: I0307 09:16:45.506922 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 09:16:45 crc kubenswrapper[4761]: I0307 09:16:45.506978 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 09:16:45 crc kubenswrapper[4761]: I0307 09:16:45.508889 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 09:16:45 crc kubenswrapper[4761]: I0307 09:16:45.508926 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 09:16:45 crc kubenswrapper[4761]: I0307 09:16:45.967015 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 09:16:45 crc kubenswrapper[4761]: I0307 09:16:45.985451 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 09:16:46 crc 
kubenswrapper[4761]: I0307 09:16:46.054863 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 07 09:16:46 crc kubenswrapper[4761]: I0307 09:16:46.055047 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 07 09:16:46 crc kubenswrapper[4761]: I0307 09:16:46.253700 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547916-42c74" event={"ID":"39691e56-a95c-4f7c-827a-d88b17d628f4","Type":"ContainerStarted","Data":"81cfe2e925e6f21f93bd135229819b95131acd12552e3ecc6934b0edcd5d0a68"} Mar 07 09:16:46 crc kubenswrapper[4761]: I0307 09:16:46.275432 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547916-42c74" podStartSLOduration=43.236493869 podStartE2EDuration="45.273432027s" podCreationTimestamp="2026-03-07 09:16:01 +0000 UTC" firstStartedPulling="2026-03-07 09:16:41.453709487 +0000 UTC m=+5258.362875962" lastFinishedPulling="2026-03-07 09:16:43.490647645 +0000 UTC m=+5260.399814120" observedRunningTime="2026-03-07 09:16:46.268810783 +0000 UTC m=+5263.177977258" watchObservedRunningTime="2026-03-07 09:16:46.273432027 +0000 UTC m=+5263.182598502" Mar 07 09:16:50 crc kubenswrapper[4761]: I0307 09:16:50.311439 4761 generic.go:334] "Generic (PLEG): container finished" podID="69ab7bc1-753e-437c-bd70-130581863fde" containerID="d1593393ea8982a1ba24a2a7870fa9fc1e67f00e525f221c8b96901d677b86a6" exitCode=0 Mar 07 09:16:50 crc kubenswrapper[4761]: I0307 09:16:50.311559 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"69ab7bc1-753e-437c-bd70-130581863fde","Type":"ContainerDied","Data":"d1593393ea8982a1ba24a2a7870fa9fc1e67f00e525f221c8b96901d677b86a6"} Mar 07 09:16:50 crc kubenswrapper[4761]: I0307 09:16:50.314687 4761 generic.go:334] "Generic (PLEG): container finished" 
podID="39691e56-a95c-4f7c-827a-d88b17d628f4" containerID="81cfe2e925e6f21f93bd135229819b95131acd12552e3ecc6934b0edcd5d0a68" exitCode=0 Mar 07 09:16:50 crc kubenswrapper[4761]: I0307 09:16:50.314761 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547916-42c74" event={"ID":"39691e56-a95c-4f7c-827a-d88b17d628f4","Type":"ContainerDied","Data":"81cfe2e925e6f21f93bd135229819b95131acd12552e3ecc6934b0edcd5d0a68"} Mar 07 09:16:50 crc kubenswrapper[4761]: I0307 09:16:50.317460 4761 generic.go:334] "Generic (PLEG): container finished" podID="6f5bae71-535d-4369-941e-1602475cda35" containerID="c54c0c0f2e512a5f9297e05077e975dc9094225cfea400394ce516275a224228" exitCode=0 Mar 07 09:16:50 crc kubenswrapper[4761]: I0307 09:16:50.317499 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6zwc" event={"ID":"6f5bae71-535d-4369-941e-1602475cda35","Type":"ContainerDied","Data":"c54c0c0f2e512a5f9297e05077e975dc9094225cfea400394ce516275a224228"} Mar 07 09:16:51 crc kubenswrapper[4761]: I0307 09:16:51.960142 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547916-42c74" Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.038611 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x55fb\" (UniqueName: \"kubernetes.io/projected/39691e56-a95c-4f7c-827a-d88b17d628f4-kube-api-access-x55fb\") pod \"39691e56-a95c-4f7c-827a-d88b17d628f4\" (UID: \"39691e56-a95c-4f7c-827a-d88b17d628f4\") " Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.065312 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39691e56-a95c-4f7c-827a-d88b17d628f4-kube-api-access-x55fb" (OuterVolumeSpecName: "kube-api-access-x55fb") pod "39691e56-a95c-4f7c-827a-d88b17d628f4" (UID: "39691e56-a95c-4f7c-827a-d88b17d628f4"). 
InnerVolumeSpecName "kube-api-access-x55fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.140345 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" containerID="cri-o://5fcc2c691603f4627e78e1eaff03f53c949e513e4a726f449c6bcc6c90c6849a" gracePeriod=15 Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.142228 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x55fb\" (UniqueName: \"kubernetes.io/projected/39691e56-a95c-4f7c-827a-d88b17d628f4-kube-api-access-x55fb\") on node \"crc\" DevicePath \"\"" Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.345357 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547916-42c74" Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.345353 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547916-42c74" event={"ID":"39691e56-a95c-4f7c-827a-d88b17d628f4","Type":"ContainerDied","Data":"0ecbbd844c657b743328e4ca73f9925f5650d3ff1bc39843d2e6dbbe8bc15eca"} Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.346270 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ecbbd844c657b743328e4ca73f9925f5650d3ff1bc39843d2e6dbbe8bc15eca" Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.347169 4761 generic.go:334] "Generic (PLEG): container finished" podID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerID="5fcc2c691603f4627e78e1eaff03f53c949e513e4a726f449c6bcc6c90c6849a" exitCode=0 Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.347258 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" 
event={"ID":"e91a422d-2255-4769-8a0e-6eb6f8b93eed","Type":"ContainerDied","Data":"5fcc2c691603f4627e78e1eaff03f53c949e513e4a726f449c6bcc6c90c6849a"} Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.349876 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6zwc" event={"ID":"6f5bae71-535d-4369-941e-1602475cda35","Type":"ContainerStarted","Data":"02fdef28d190fca9ed709889b337f4c9d649c800f34702df5f5aceb94bb5b963"} Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.382280 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j6zwc" podStartSLOduration=12.602874433 podStartE2EDuration="21.382260982s" podCreationTimestamp="2026-03-07 09:16:31 +0000 UTC" firstStartedPulling="2026-03-07 09:16:42.196213699 +0000 UTC m=+5259.105380184" lastFinishedPulling="2026-03-07 09:16:50.975600258 +0000 UTC m=+5267.884766733" observedRunningTime="2026-03-07 09:16:52.372688635 +0000 UTC m=+5269.281855120" watchObservedRunningTime="2026-03-07 09:16:52.382260982 +0000 UTC m=+5269.291427457" Mar 07 09:16:54 crc kubenswrapper[4761]: I0307 09:16:54.376117 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"69ab7bc1-753e-437c-bd70-130581863fde","Type":"ContainerStarted","Data":"ae60ad08c65b47a1b96e731ab004738580cd4109ad71d884480879e18226f7c5"} Mar 07 09:16:54 crc kubenswrapper[4761]: I0307 09:16:54.391657 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" event={"ID":"e91a422d-2255-4769-8a0e-6eb6f8b93eed","Type":"ContainerStarted","Data":"0a713e6d10995d151aa2e3c4c911c9b9370171917a98ba59e072b2890148f066"} Mar 07 09:16:54 crc kubenswrapper[4761]: I0307 09:16:54.393401 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 09:16:54 crc kubenswrapper[4761]: I0307 
09:16:54.393490 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": dial tcp 10.217.0.64:6443: connect: connection refused" start-of-body= Mar 07 09:16:54 crc kubenswrapper[4761]: I0307 09:16:54.393524 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": dial tcp 10.217.0.64:6443: connect: connection refused" Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.403502 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": dial tcp 10.217.0.64:6443: connect: connection refused" start-of-body= Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.404044 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": dial tcp 10.217.0.64:6443: connect: connection refused" Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.508844 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.508898 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.508943 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-2lhb8" Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.510187 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"8b0db96d53a1438df592d7e58c5152b835d75062cabd09fb54e269e169a11fd7"} pod="openshift-console/downloads-7954f5f757-2lhb8" containerMessage="Container download-server failed liveness probe, will be restarted" Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.510226 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" containerID="cri-o://8b0db96d53a1438df592d7e58c5152b835d75062cabd09fb54e269e169a11fd7" gracePeriod=2 Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.510629 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.510654 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.510836 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: 
Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.510856    4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Mar 07 09:16:56 crc kubenswrapper[4761]: I0307 09:16:56.139916    4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 09:16:56 crc kubenswrapper[4761]: I0307 09:16:56.417706    4761 generic.go:334] "Generic (PLEG): container finished" podID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerID="8b0db96d53a1438df592d7e58c5152b835d75062cabd09fb54e269e169a11fd7" exitCode=0
Mar 07 09:16:56 crc kubenswrapper[4761]: I0307 09:16:56.417752    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2lhb8" event={"ID":"55412b4c-53c7-4b21-8d7c-87879ef79ed0","Type":"ContainerDied","Data":"8b0db96d53a1438df592d7e58c5152b835d75062cabd09fb54e269e169a11fd7"}
Mar 07 09:16:56 crc kubenswrapper[4761]: I0307 09:16:56.417805    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2lhb8" event={"ID":"55412b4c-53c7-4b21-8d7c-87879ef79ed0","Type":"ContainerStarted","Data":"553f6f8cd27aaa45ed5f5933b324778be0d7aebd4845bf3a96a21bf20586ace5"}
Mar 07 09:16:56 crc kubenswrapper[4761]: I0307 09:16:56.418535    4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2lhb8"
Mar 07 09:16:56 crc kubenswrapper[4761]: I0307 09:16:56.418940    4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Mar 07 09:16:56 crc kubenswrapper[4761]: I0307 09:16:56.418986    4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Mar 07 09:16:56 crc kubenswrapper[4761]: I0307 09:16:56.421936    4761 scope.go:117] "RemoveContainer" containerID="0c6fac619c77e2e5bbca7ba4216168dfb98fbe2c07537854abdd01da802bb57c"
Mar 07 09:16:57 crc kubenswrapper[4761]: I0307 09:16:57.429392    4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Mar 07 09:16:57 crc kubenswrapper[4761]: I0307 09:16:57.429797    4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Mar 07 09:16:58 crc kubenswrapper[4761]: I0307 09:16:58.341446    4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Mar 07 09:16:58 crc kubenswrapper[4761]: I0307 09:16:58.518328    4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j6zwc"
Mar 07 09:16:58 crc kubenswrapper[4761]: I0307 09:16:58.518401    4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j6zwc"
Mar 07 09:16:59 crc kubenswrapper[4761]: I0307 09:16:59.579177    4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-j6zwc" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="registry-server" probeResult="failure" output=<
Mar 07 09:16:59 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 09:16:59 crc kubenswrapper[4761]: >
Mar 07 09:17:00 crc kubenswrapper[4761]: I0307 09:17:00.191143    4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc"
Mar 07 09:17:00 crc kubenswrapper[4761]: I0307 09:17:00.321708    4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547910-tp9pj"]
Mar 07 09:17:00 crc kubenswrapper[4761]: I0307 09:17:00.341782    4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547910-tp9pj"]
Mar 07 09:17:01 crc kubenswrapper[4761]: I0307 09:17:01.721800    4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77c8bd54-9347-4e87-bd44-76913cb2a3f6" path="/var/lib/kubelet/pods/77c8bd54-9347-4e87-bd44-76913cb2a3f6/volumes"
Mar 07 09:17:03 crc kubenswrapper[4761]: I0307 09:17:03.364062    4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="69ab7bc1-753e-437c-bd70-130581863fde" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 09:17:05 crc kubenswrapper[4761]: I0307 09:17:05.508207    4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Mar 07 09:17:05 crc kubenswrapper[4761]: I0307 09:17:05.508549    4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Mar 07 09:17:05 crc kubenswrapper[4761]: I0307 09:17:05.508829    4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Mar 07 09:17:05 crc kubenswrapper[4761]: I0307 09:17:05.508889    4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Mar 07 09:17:05 crc kubenswrapper[4761]: I0307 09:17:05.992946    4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx"
Mar 07 09:17:08 crc kubenswrapper[4761]: I0307 09:17:08.367506    4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="69ab7bc1-753e-437c-bd70-130581863fde" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 09:17:09 crc kubenswrapper[4761]: I0307 09:17:09.559320    4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-j6zwc" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="registry-server" probeResult="failure" output=<
Mar 07 09:17:09 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 09:17:09 crc kubenswrapper[4761]: >
Mar 07 09:17:13 crc kubenswrapper[4761]: I0307 09:17:13.368817    4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="69ab7bc1-753e-437c-bd70-130581863fde" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 07 09:17:14 crc kubenswrapper[4761]: I0307 09:17:14.869647    4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Mar 07 09:17:15 crc kubenswrapper[4761]: I0307 09:17:15.021787    4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Mar 07 09:17:15 crc kubenswrapper[4761]: I0307 09:17:15.525120    4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2lhb8"
Mar 07 09:17:15 crc kubenswrapper[4761]: I0307 09:17:15.804095    4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Mar 07 09:17:15 crc kubenswrapper[4761]: I0307 09:17:15.911498    4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Mar 07 09:17:18 crc kubenswrapper[4761]: I0307 09:17:18.383676    4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Mar 07 09:17:19 crc kubenswrapper[4761]: I0307 09:17:19.573395    4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-j6zwc" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="registry-server" probeResult="failure" output=<
Mar 07 09:17:19 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 09:17:19 crc kubenswrapper[4761]: >
Mar 07 09:17:22 crc kubenswrapper[4761]: I0307 09:17:22.642964    4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 07 09:17:24 crc kubenswrapper[4761]: I0307 09:17:24.813563    4761 scope.go:117] "RemoveContainer" containerID="258a09f7752f77736f7c79cc137d75713f6d9a437375f7b59cbe63159e498518"
Mar 07 09:17:28 crc kubenswrapper[4761]: I0307 09:17:28.614946    4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j6zwc"
Mar 07 09:17:28 crc kubenswrapper[4761]: I0307 09:17:28.672585    4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j6zwc"
Mar 07 09:17:28 crc kubenswrapper[4761]: I0307 09:17:28.855967    4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j6zwc"]
Mar 07 09:17:29 crc kubenswrapper[4761]: I0307 09:17:29.859894    4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j6zwc" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="registry-server" containerID="cri-o://02fdef28d190fca9ed709889b337f4c9d649c800f34702df5f5aceb94bb5b963" gracePeriod=2
Mar 07 09:17:30 crc kubenswrapper[4761]: I0307 09:17:30.859576    4761 generic.go:334] "Generic (PLEG): container finished" podID="6f5bae71-535d-4369-941e-1602475cda35" containerID="02fdef28d190fca9ed709889b337f4c9d649c800f34702df5f5aceb94bb5b963" exitCode=0
Mar 07 09:17:30 crc kubenswrapper[4761]: I0307 09:17:30.860182    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6zwc" event={"ID":"6f5bae71-535d-4369-941e-1602475cda35","Type":"ContainerDied","Data":"02fdef28d190fca9ed709889b337f4c9d649c800f34702df5f5aceb94bb5b963"}
Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.195606    4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j6zwc"
Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.305222    4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-utilities\") pod \"6f5bae71-535d-4369-941e-1602475cda35\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") "
Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.306003    4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-catalog-content\") pod \"6f5bae71-535d-4369-941e-1602475cda35\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") "
Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.306068    4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwwjt\" (UniqueName: \"kubernetes.io/projected/6f5bae71-535d-4369-941e-1602475cda35-kube-api-access-cwwjt\") pod \"6f5bae71-535d-4369-941e-1602475cda35\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") "
Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.309996    4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-utilities" (OuterVolumeSpecName: "utilities") pod "6f5bae71-535d-4369-941e-1602475cda35" (UID: "6f5bae71-535d-4369-941e-1602475cda35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.324928    4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f5bae71-535d-4369-941e-1602475cda35-kube-api-access-cwwjt" (OuterVolumeSpecName: "kube-api-access-cwwjt") pod "6f5bae71-535d-4369-941e-1602475cda35" (UID: "6f5bae71-535d-4369-941e-1602475cda35"). InnerVolumeSpecName "kube-api-access-cwwjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.409644    4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.409679    4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwwjt\" (UniqueName: \"kubernetes.io/projected/6f5bae71-535d-4369-941e-1602475cda35-kube-api-access-cwwjt\") on node \"crc\" DevicePath \"\""
Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.433185    4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f5bae71-535d-4369-941e-1602475cda35" (UID: "6f5bae71-535d-4369-941e-1602475cda35"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.511473    4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.874578    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6zwc" event={"ID":"6f5bae71-535d-4369-941e-1602475cda35","Type":"ContainerDied","Data":"9a27f562b02b2754885ebcf54aa377df53929dd08475e16c9c6a548f2b4e320e"}
Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.874659    4761 scope.go:117] "RemoveContainer" containerID="02fdef28d190fca9ed709889b337f4c9d649c800f34702df5f5aceb94bb5b963"
Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.874689    4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j6zwc"
Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.904895    4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j6zwc"]
Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.913957    4761 scope.go:117] "RemoveContainer" containerID="c54c0c0f2e512a5f9297e05077e975dc9094225cfea400394ce516275a224228"
Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.918051    4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j6zwc"]
Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.937333    4761 scope.go:117] "RemoveContainer" containerID="59e54b39f5b26ef3510c70a43e8f0e62f502310a1c9a43edf87d39a64d11d5c3"
Mar 07 09:17:33 crc kubenswrapper[4761]: I0307 09:17:33.733093    4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f5bae71-535d-4369-941e-1602475cda35" path="/var/lib/kubelet/pods/6f5bae71-535d-4369-941e-1602475cda35/volumes"
Mar 07 09:17:43 crc kubenswrapper[4761]: I0307 09:17:43.768072    4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 09:17:43 crc kubenswrapper[4761]: I0307 09:17:43.768513    4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.351048    4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547918-24jnd"]
Mar 07 09:18:00 crc kubenswrapper[4761]: E0307 09:18:00.358395    4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="extract-content"
Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.358431    4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="extract-content"
Mar 07 09:18:00 crc kubenswrapper[4761]: E0307 09:18:00.358463    4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="registry-server"
Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.358469    4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="registry-server"
Mar 07 09:18:00 crc kubenswrapper[4761]: E0307 09:18:00.358494    4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39691e56-a95c-4f7c-827a-d88b17d628f4" containerName="oc"
Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.358501    4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="39691e56-a95c-4f7c-827a-d88b17d628f4" containerName="oc"
Mar 07 09:18:00 crc kubenswrapper[4761]: E0307 09:18:00.358518    4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="extract-utilities"
Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.358524    4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="extract-utilities"
Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.362361    4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="39691e56-a95c-4f7c-827a-d88b17d628f4" containerName="oc"
Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.362401    4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="registry-server"
Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.368632    4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547918-24jnd"
Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.383006    4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.382986    4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv"
Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.384792    4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.447951    4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547918-24jnd"]
Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.505810    4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzsm6\" (UniqueName: \"kubernetes.io/projected/ca0bc391-b6d1-4d68-a5dd-047c1f5f5009-kube-api-access-lzsm6\") pod \"auto-csr-approver-29547918-24jnd\" (UID: \"ca0bc391-b6d1-4d68-a5dd-047c1f5f5009\") " pod="openshift-infra/auto-csr-approver-29547918-24jnd"
Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.610013    4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzsm6\" (UniqueName: \"kubernetes.io/projected/ca0bc391-b6d1-4d68-a5dd-047c1f5f5009-kube-api-access-lzsm6\") pod \"auto-csr-approver-29547918-24jnd\" (UID: \"ca0bc391-b6d1-4d68-a5dd-047c1f5f5009\") " pod="openshift-infra/auto-csr-approver-29547918-24jnd"
Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.658871    4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzsm6\" (UniqueName: \"kubernetes.io/projected/ca0bc391-b6d1-4d68-a5dd-047c1f5f5009-kube-api-access-lzsm6\") pod \"auto-csr-approver-29547918-24jnd\" (UID: \"ca0bc391-b6d1-4d68-a5dd-047c1f5f5009\") " pod="openshift-infra/auto-csr-approver-29547918-24jnd"
Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.700912    4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547918-24jnd"
Mar 07 09:18:01 crc kubenswrapper[4761]: I0307 09:18:01.453823    4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547918-24jnd"]
Mar 07 09:18:02 crc kubenswrapper[4761]: I0307 09:18:02.270263    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547918-24jnd" event={"ID":"ca0bc391-b6d1-4d68-a5dd-047c1f5f5009","Type":"ContainerStarted","Data":"ec600122abf639da588e494a79551fa0fc3051af66e3d3b47b21e0c6113a9d17"}
Mar 07 09:18:04 crc kubenswrapper[4761]: I0307 09:18:04.305302    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547918-24jnd" event={"ID":"ca0bc391-b6d1-4d68-a5dd-047c1f5f5009","Type":"ContainerStarted","Data":"603911c1614386a9b343d6f7ef703fbd8a5bf73eba19af2e1b950c8408682339"}
Mar 07 09:18:04 crc kubenswrapper[4761]: I0307 09:18:04.328506    4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547918-24jnd" podStartSLOduration=3.459717392 podStartE2EDuration="4.327069402s" podCreationTimestamp="2026-03-07 09:18:00 +0000 UTC" firstStartedPulling="2026-03-07 09:18:01.480901432 +0000 UTC m=+5338.390067907" lastFinishedPulling="2026-03-07 09:18:02.348253442 +0000 UTC m=+5339.257419917" observedRunningTime="2026-03-07 09:18:04.323656558 +0000 UTC m=+5341.232823033" watchObservedRunningTime="2026-03-07 09:18:04.327069402 +0000 UTC m=+5341.236235877"
Mar 07 09:18:05 crc kubenswrapper[4761]: I0307 09:18:05.320106    4761 generic.go:334] "Generic (PLEG): container finished" podID="ca0bc391-b6d1-4d68-a5dd-047c1f5f5009" containerID="603911c1614386a9b343d6f7ef703fbd8a5bf73eba19af2e1b950c8408682339" exitCode=0
Mar 07 09:18:05 crc kubenswrapper[4761]: I0307 09:18:05.320179    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547918-24jnd" event={"ID":"ca0bc391-b6d1-4d68-a5dd-047c1f5f5009","Type":"ContainerDied","Data":"603911c1614386a9b343d6f7ef703fbd8a5bf73eba19af2e1b950c8408682339"}
Mar 07 09:18:07 crc kubenswrapper[4761]: I0307 09:18:07.056651    4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547918-24jnd"
Mar 07 09:18:07 crc kubenswrapper[4761]: I0307 09:18:07.172757    4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzsm6\" (UniqueName: \"kubernetes.io/projected/ca0bc391-b6d1-4d68-a5dd-047c1f5f5009-kube-api-access-lzsm6\") pod \"ca0bc391-b6d1-4d68-a5dd-047c1f5f5009\" (UID: \"ca0bc391-b6d1-4d68-a5dd-047c1f5f5009\") "
Mar 07 09:18:07 crc kubenswrapper[4761]: I0307 09:18:07.191048    4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca0bc391-b6d1-4d68-a5dd-047c1f5f5009-kube-api-access-lzsm6" (OuterVolumeSpecName: "kube-api-access-lzsm6") pod "ca0bc391-b6d1-4d68-a5dd-047c1f5f5009" (UID: "ca0bc391-b6d1-4d68-a5dd-047c1f5f5009"). InnerVolumeSpecName "kube-api-access-lzsm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 09:18:07 crc kubenswrapper[4761]: I0307 09:18:07.276186    4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzsm6\" (UniqueName: \"kubernetes.io/projected/ca0bc391-b6d1-4d68-a5dd-047c1f5f5009-kube-api-access-lzsm6\") on node \"crc\" DevicePath \"\""
Mar 07 09:18:07 crc kubenswrapper[4761]: I0307 09:18:07.348822    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547918-24jnd" event={"ID":"ca0bc391-b6d1-4d68-a5dd-047c1f5f5009","Type":"ContainerDied","Data":"ec600122abf639da588e494a79551fa0fc3051af66e3d3b47b21e0c6113a9d17"}
Mar 07 09:18:07 crc kubenswrapper[4761]: I0307 09:18:07.348882    4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547918-24jnd"
Mar 07 09:18:07 crc kubenswrapper[4761]: I0307 09:18:07.349395    4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec600122abf639da588e494a79551fa0fc3051af66e3d3b47b21e0c6113a9d17"
Mar 07 09:18:08 crc kubenswrapper[4761]: I0307 09:18:08.149183    4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547912-49bh4"]
Mar 07 09:18:08 crc kubenswrapper[4761]: I0307 09:18:08.161595    4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547912-49bh4"]
Mar 07 09:18:09 crc kubenswrapper[4761]: I0307 09:18:09.721460    4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24a7900d-f79e-4ea3-92bb-9d0af09ee62f" path="/var/lib/kubelet/pods/24a7900d-f79e-4ea3-92bb-9d0af09ee62f/volumes"
Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.278009    4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bb42n"]
Mar 07 09:18:11 crc kubenswrapper[4761]: E0307 09:18:11.278992    4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca0bc391-b6d1-4d68-a5dd-047c1f5f5009" containerName="oc"
Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.279011    4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca0bc391-b6d1-4d68-a5dd-047c1f5f5009" containerName="oc"
Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.279274    4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca0bc391-b6d1-4d68-a5dd-047c1f5f5009" containerName="oc"
Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.280931    4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bb42n"
Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.304863    4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bb42n"]
Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.375151    4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mkkw\" (UniqueName: \"kubernetes.io/projected/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-kube-api-access-7mkkw\") pod \"redhat-marketplace-bb42n\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " pod="openshift-marketplace/redhat-marketplace-bb42n"
Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.375226    4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-utilities\") pod \"redhat-marketplace-bb42n\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " pod="openshift-marketplace/redhat-marketplace-bb42n"
Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.375300    4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-catalog-content\") pod \"redhat-marketplace-bb42n\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " pod="openshift-marketplace/redhat-marketplace-bb42n"
Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.477353    4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mkkw\" (UniqueName: \"kubernetes.io/projected/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-kube-api-access-7mkkw\") pod \"redhat-marketplace-bb42n\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " pod="openshift-marketplace/redhat-marketplace-bb42n"
Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.477441    4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-utilities\") pod \"redhat-marketplace-bb42n\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " pod="openshift-marketplace/redhat-marketplace-bb42n"
Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.477587    4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-catalog-content\") pod \"redhat-marketplace-bb42n\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " pod="openshift-marketplace/redhat-marketplace-bb42n"
Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.478187    4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-utilities\") pod \"redhat-marketplace-bb42n\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " pod="openshift-marketplace/redhat-marketplace-bb42n"
Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.478325    4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-catalog-content\") pod \"redhat-marketplace-bb42n\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " pod="openshift-marketplace/redhat-marketplace-bb42n"
Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.500067    4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mkkw\" (UniqueName: \"kubernetes.io/projected/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-kube-api-access-7mkkw\") pod \"redhat-marketplace-bb42n\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " pod="openshift-marketplace/redhat-marketplace-bb42n"
Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.611661    4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bb42n"
Mar 07 09:18:12 crc kubenswrapper[4761]: I0307 09:18:12.144651    4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bb42n"]
Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.429171    4761 generic.go:334] "Generic (PLEG): container finished" podID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerID="77a85ce1faac7038983a49d54e58da66a09319237b7a2f98018549927b388672" exitCode=0
Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.429275    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb42n" event={"ID":"174e8da3-c9b3-46a1-bdb2-9c59da7067f0","Type":"ContainerDied","Data":"77a85ce1faac7038983a49d54e58da66a09319237b7a2f98018549927b388672"}
Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.429665    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb42n" event={"ID":"174e8da3-c9b3-46a1-bdb2-9c59da7067f0","Type":"ContainerStarted","Data":"773504c068f8e6ac89b23bb1465844eba6fcce1aba9922a84139d367f28a352d"}
Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.664397    4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8bsc6"]
Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.670047    4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8bsc6"
Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.680739    4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8bsc6"]
Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.730810    4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-utilities\") pod \"certified-operators-8bsc6\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " pod="openshift-marketplace/certified-operators-8bsc6"
Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.730917    4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-catalog-content\") pod \"certified-operators-8bsc6\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " pod="openshift-marketplace/certified-operators-8bsc6"
Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.731095    4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f55h8\" (UniqueName: \"kubernetes.io/projected/11b69907-7369-4e80-9e25-8f2d2c0f72f0-kube-api-access-f55h8\") pod \"certified-operators-8bsc6\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " pod="openshift-marketplace/certified-operators-8bsc6"
Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.769052    4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.769105    4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.832895    4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-utilities\") pod \"certified-operators-8bsc6\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " pod="openshift-marketplace/certified-operators-8bsc6"
Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.832984    4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-catalog-content\") pod \"certified-operators-8bsc6\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " pod="openshift-marketplace/certified-operators-8bsc6"
Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.833053    4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f55h8\" (UniqueName: \"kubernetes.io/projected/11b69907-7369-4e80-9e25-8f2d2c0f72f0-kube-api-access-f55h8\") pod \"certified-operators-8bsc6\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " pod="openshift-marketplace/certified-operators-8bsc6"
Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.833440    4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-catalog-content\") pod \"certified-operators-8bsc6\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " pod="openshift-marketplace/certified-operators-8bsc6"
Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.833480    4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-utilities\") pod \"certified-operators-8bsc6\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " pod="openshift-marketplace/certified-operators-8bsc6"
Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.854555    4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f55h8\" (UniqueName: \"kubernetes.io/projected/11b69907-7369-4e80-9e25-8f2d2c0f72f0-kube-api-access-f55h8\") pod \"certified-operators-8bsc6\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " pod="openshift-marketplace/certified-operators-8bsc6"
Mar 07 09:18:14 crc kubenswrapper[4761]: I0307 09:18:14.003094    4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8bsc6"
Mar 07 09:18:14 crc kubenswrapper[4761]: I0307 09:18:14.442563    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb42n" event={"ID":"174e8da3-c9b3-46a1-bdb2-9c59da7067f0","Type":"ContainerStarted","Data":"6d7c8644a4813575fca5e3b50d8c2857c6eaf51f98bd7a2d8fc0fae2e917df33"}
Mar 07 09:18:14 crc kubenswrapper[4761]: I0307 09:18:14.575364    4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8bsc6"]
Mar 07 09:18:14 crc kubenswrapper[4761]: W0307 09:18:14.580841    4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11b69907_7369_4e80_9e25_8f2d2c0f72f0.slice/crio-98f7ff4969c762f80a0c59a1ed92836726edd04ce10345b5f08f75217eaa95a1 WatchSource:0}: Error finding container 98f7ff4969c762f80a0c59a1ed92836726edd04ce10345b5f08f75217eaa95a1: Status 404 returned error can't find the container with id 98f7ff4969c762f80a0c59a1ed92836726edd04ce10345b5f08f75217eaa95a1
Mar 07 09:18:15 crc kubenswrapper[4761]: I0307 09:18:15.457551    4761 generic.go:334] "Generic (PLEG): container finished" podID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerID="c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f" exitCode=0
Mar 07 09:18:15 crc kubenswrapper[4761]: I0307 09:18:15.457610    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bsc6" event={"ID":"11b69907-7369-4e80-9e25-8f2d2c0f72f0","Type":"ContainerDied","Data":"c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f"}
Mar 07 09:18:15 crc kubenswrapper[4761]: I0307 09:18:15.457908    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bsc6" event={"ID":"11b69907-7369-4e80-9e25-8f2d2c0f72f0","Type":"ContainerStarted","Data":"98f7ff4969c762f80a0c59a1ed92836726edd04ce10345b5f08f75217eaa95a1"}
Mar 07 09:18:16 crc kubenswrapper[4761]: I0307 09:18:16.473270    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bsc6" event={"ID":"11b69907-7369-4e80-9e25-8f2d2c0f72f0","Type":"ContainerStarted","Data":"a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2"}
Mar 07 09:18:16 crc kubenswrapper[4761]: I0307 09:18:16.477398    4761 generic.go:334] "Generic (PLEG): container finished" podID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerID="6d7c8644a4813575fca5e3b50d8c2857c6eaf51f98bd7a2d8fc0fae2e917df33" exitCode=0
Mar 07 09:18:16 crc kubenswrapper[4761]: I0307 09:18:16.477442    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb42n" event={"ID":"174e8da3-c9b3-46a1-bdb2-9c59da7067f0","Type":"ContainerDied","Data":"6d7c8644a4813575fca5e3b50d8c2857c6eaf51f98bd7a2d8fc0fae2e917df33"}
Mar 07 09:18:17 crc kubenswrapper[4761]: I0307 09:18:17.493835    4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb42n"
event={"ID":"174e8da3-c9b3-46a1-bdb2-9c59da7067f0","Type":"ContainerStarted","Data":"d6c6124f40a6b60ad91a817cdccfcbe3dd8b7279a6a94e3bbf18bf754e390b4b"} Mar 07 09:18:17 crc kubenswrapper[4761]: I0307 09:18:17.522211 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bb42n" podStartSLOduration=2.8747719309999997 podStartE2EDuration="6.522187956s" podCreationTimestamp="2026-03-07 09:18:11 +0000 UTC" firstStartedPulling="2026-03-07 09:18:13.431637367 +0000 UTC m=+5350.340803842" lastFinishedPulling="2026-03-07 09:18:17.079053392 +0000 UTC m=+5353.988219867" observedRunningTime="2026-03-07 09:18:17.51306388 +0000 UTC m=+5354.422230375" watchObservedRunningTime="2026-03-07 09:18:17.522187956 +0000 UTC m=+5354.431354431" Mar 07 09:18:19 crc kubenswrapper[4761]: I0307 09:18:19.523922 4761 generic.go:334] "Generic (PLEG): container finished" podID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerID="a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2" exitCode=0 Mar 07 09:18:19 crc kubenswrapper[4761]: I0307 09:18:19.524413 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bsc6" event={"ID":"11b69907-7369-4e80-9e25-8f2d2c0f72f0","Type":"ContainerDied","Data":"a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2"} Mar 07 09:18:20 crc kubenswrapper[4761]: I0307 09:18:20.536653 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bsc6" event={"ID":"11b69907-7369-4e80-9e25-8f2d2c0f72f0","Type":"ContainerStarted","Data":"0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa"} Mar 07 09:18:20 crc kubenswrapper[4761]: I0307 09:18:20.560612 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8bsc6" podStartSLOduration=3.036132189 podStartE2EDuration="7.560596502s" podCreationTimestamp="2026-03-07 09:18:13 +0000 
UTC" firstStartedPulling="2026-03-07 09:18:15.460120547 +0000 UTC m=+5352.369287022" lastFinishedPulling="2026-03-07 09:18:19.98458486 +0000 UTC m=+5356.893751335" observedRunningTime="2026-03-07 09:18:20.558227334 +0000 UTC m=+5357.467393819" watchObservedRunningTime="2026-03-07 09:18:20.560596502 +0000 UTC m=+5357.469762977" Mar 07 09:18:21 crc kubenswrapper[4761]: I0307 09:18:21.613373 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:21 crc kubenswrapper[4761]: I0307 09:18:21.613772 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:22 crc kubenswrapper[4761]: I0307 09:18:22.947747 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-bb42n" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerName="registry-server" probeResult="failure" output=< Mar 07 09:18:22 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:18:22 crc kubenswrapper[4761]: > Mar 07 09:18:24 crc kubenswrapper[4761]: I0307 09:18:24.004117 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:24 crc kubenswrapper[4761]: I0307 09:18:24.006553 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:25 crc kubenswrapper[4761]: I0307 09:18:25.175626 4761 scope.go:117] "RemoveContainer" containerID="6f37c673145b5f0b43da9649c88fc1a3229b7c0a257204cae85e333d5b0607d1" Mar 07 09:18:25 crc kubenswrapper[4761]: I0307 09:18:25.652220 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8bsc6" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="registry-server" probeResult="failure" output=< Mar 07 09:18:25 crc 
kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:18:25 crc kubenswrapper[4761]: > Mar 07 09:18:31 crc kubenswrapper[4761]: I0307 09:18:31.690468 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:31 crc kubenswrapper[4761]: I0307 09:18:31.753580 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:31 crc kubenswrapper[4761]: I0307 09:18:31.943766 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bb42n"] Mar 07 09:18:33 crc kubenswrapper[4761]: I0307 09:18:33.693139 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bb42n" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerName="registry-server" containerID="cri-o://d6c6124f40a6b60ad91a817cdccfcbe3dd8b7279a6a94e3bbf18bf754e390b4b" gracePeriod=2 Mar 07 09:18:34 crc kubenswrapper[4761]: I0307 09:18:34.709159 4761 generic.go:334] "Generic (PLEG): container finished" podID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerID="d6c6124f40a6b60ad91a817cdccfcbe3dd8b7279a6a94e3bbf18bf754e390b4b" exitCode=0 Mar 07 09:18:34 crc kubenswrapper[4761]: I0307 09:18:34.709255 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb42n" event={"ID":"174e8da3-c9b3-46a1-bdb2-9c59da7067f0","Type":"ContainerDied","Data":"d6c6124f40a6b60ad91a817cdccfcbe3dd8b7279a6a94e3bbf18bf754e390b4b"} Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.075093 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8bsc6" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="registry-server" probeResult="failure" output=< Mar 07 09:18:35 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 
09:18:35 crc kubenswrapper[4761]: > Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.157778 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.199605 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mkkw\" (UniqueName: \"kubernetes.io/projected/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-kube-api-access-7mkkw\") pod \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.199754 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-utilities\") pod \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.199939 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-catalog-content\") pod \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.203866 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-utilities" (OuterVolumeSpecName: "utilities") pod "174e8da3-c9b3-46a1-bdb2-9c59da7067f0" (UID: "174e8da3-c9b3-46a1-bdb2-9c59da7067f0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.228426 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "174e8da3-c9b3-46a1-bdb2-9c59da7067f0" (UID: "174e8da3-c9b3-46a1-bdb2-9c59da7067f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.239918 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-kube-api-access-7mkkw" (OuterVolumeSpecName: "kube-api-access-7mkkw") pod "174e8da3-c9b3-46a1-bdb2-9c59da7067f0" (UID: "174e8da3-c9b3-46a1-bdb2-9c59da7067f0"). InnerVolumeSpecName "kube-api-access-7mkkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.301846 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.301894 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.301905 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mkkw\" (UniqueName: \"kubernetes.io/projected/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-kube-api-access-7mkkw\") on node \"crc\" DevicePath \"\"" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.726273 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.728738 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb42n" event={"ID":"174e8da3-c9b3-46a1-bdb2-9c59da7067f0","Type":"ContainerDied","Data":"773504c068f8e6ac89b23bb1465844eba6fcce1aba9922a84139d367f28a352d"} Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.728802 4761 scope.go:117] "RemoveContainer" containerID="d6c6124f40a6b60ad91a817cdccfcbe3dd8b7279a6a94e3bbf18bf754e390b4b" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.758135 4761 scope.go:117] "RemoveContainer" containerID="6d7c8644a4813575fca5e3b50d8c2857c6eaf51f98bd7a2d8fc0fae2e917df33" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.801648 4761 scope.go:117] "RemoveContainer" containerID="77a85ce1faac7038983a49d54e58da66a09319237b7a2f98018549927b388672" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.801664 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bb42n"] Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.817275 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bb42n"] Mar 07 09:18:37 crc kubenswrapper[4761]: I0307 09:18:37.723682 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" path="/var/lib/kubelet/pods/174e8da3-c9b3-46a1-bdb2-9c59da7067f0/volumes" Mar 07 09:18:43 crc kubenswrapper[4761]: I0307 09:18:43.768315 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:18:43 crc kubenswrapper[4761]: I0307 09:18:43.768925 4761 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:18:43 crc kubenswrapper[4761]: I0307 09:18:43.768997 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 09:18:43 crc kubenswrapper[4761]: I0307 09:18:43.770069 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 09:18:43 crc kubenswrapper[4761]: I0307 09:18:43.770118 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" gracePeriod=600 Mar 07 09:18:43 crc kubenswrapper[4761]: E0307 09:18:43.907224 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:18:44 crc kubenswrapper[4761]: I0307 09:18:44.068233 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:44 crc kubenswrapper[4761]: I0307 09:18:44.130146 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:44 crc kubenswrapper[4761]: I0307 09:18:44.850361 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" exitCode=0 Mar 07 09:18:44 crc kubenswrapper[4761]: I0307 09:18:44.851370 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"} Mar 07 09:18:44 crc kubenswrapper[4761]: I0307 09:18:44.851404 4761 scope.go:117] "RemoveContainer" containerID="45493895bc908f690bc18f8d9a3f4e9f36cdf8af714be35170fe2ff42764c391" Mar 07 09:18:44 crc kubenswrapper[4761]: I0307 09:18:44.852036 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:18:44 crc kubenswrapper[4761]: E0307 09:18:44.852411 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:18:44 crc kubenswrapper[4761]: I0307 09:18:44.875293 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8bsc6"] Mar 07 09:18:45 crc kubenswrapper[4761]: I0307 09:18:45.864534 4761 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-8bsc6" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="registry-server" containerID="cri-o://0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa" gracePeriod=2 Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.442782 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.517167 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-catalog-content\") pod \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.517366 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f55h8\" (UniqueName: \"kubernetes.io/projected/11b69907-7369-4e80-9e25-8f2d2c0f72f0-kube-api-access-f55h8\") pod \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.517505 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-utilities\") pod \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.519367 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-utilities" (OuterVolumeSpecName: "utilities") pod "11b69907-7369-4e80-9e25-8f2d2c0f72f0" (UID: "11b69907-7369-4e80-9e25-8f2d2c0f72f0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.520324 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.524247 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11b69907-7369-4e80-9e25-8f2d2c0f72f0-kube-api-access-f55h8" (OuterVolumeSpecName: "kube-api-access-f55h8") pod "11b69907-7369-4e80-9e25-8f2d2c0f72f0" (UID: "11b69907-7369-4e80-9e25-8f2d2c0f72f0"). InnerVolumeSpecName "kube-api-access-f55h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.598051 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11b69907-7369-4e80-9e25-8f2d2c0f72f0" (UID: "11b69907-7369-4e80-9e25-8f2d2c0f72f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.623146 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f55h8\" (UniqueName: \"kubernetes.io/projected/11b69907-7369-4e80-9e25-8f2d2c0f72f0-kube-api-access-f55h8\") on node \"crc\" DevicePath \"\"" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.623359 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.885612 4761 generic.go:334] "Generic (PLEG): container finished" podID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerID="0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa" exitCode=0 Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.885683 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.885703 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bsc6" event={"ID":"11b69907-7369-4e80-9e25-8f2d2c0f72f0","Type":"ContainerDied","Data":"0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa"} Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.885773 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bsc6" event={"ID":"11b69907-7369-4e80-9e25-8f2d2c0f72f0","Type":"ContainerDied","Data":"98f7ff4969c762f80a0c59a1ed92836726edd04ce10345b5f08f75217eaa95a1"} Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.885805 4761 scope.go:117] "RemoveContainer" containerID="0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.927677 4761 scope.go:117] "RemoveContainer" 
containerID="a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.944976 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8bsc6"] Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.953054 4761 scope.go:117] "RemoveContainer" containerID="c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.962842 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8bsc6"] Mar 07 09:18:47 crc kubenswrapper[4761]: I0307 09:18:47.017604 4761 scope.go:117] "RemoveContainer" containerID="0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa" Mar 07 09:18:47 crc kubenswrapper[4761]: E0307 09:18:47.020748 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa\": container with ID starting with 0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa not found: ID does not exist" containerID="0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa" Mar 07 09:18:47 crc kubenswrapper[4761]: I0307 09:18:47.020792 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa"} err="failed to get container status \"0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa\": rpc error: code = NotFound desc = could not find container \"0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa\": container with ID starting with 0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa not found: ID does not exist" Mar 07 09:18:47 crc kubenswrapper[4761]: I0307 09:18:47.020814 4761 scope.go:117] "RemoveContainer" 
containerID="a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2" Mar 07 09:18:47 crc kubenswrapper[4761]: E0307 09:18:47.021208 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2\": container with ID starting with a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2 not found: ID does not exist" containerID="a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2" Mar 07 09:18:47 crc kubenswrapper[4761]: I0307 09:18:47.021234 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2"} err="failed to get container status \"a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2\": rpc error: code = NotFound desc = could not find container \"a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2\": container with ID starting with a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2 not found: ID does not exist" Mar 07 09:18:47 crc kubenswrapper[4761]: I0307 09:18:47.021252 4761 scope.go:117] "RemoveContainer" containerID="c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f" Mar 07 09:18:47 crc kubenswrapper[4761]: E0307 09:18:47.021509 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f\": container with ID starting with c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f not found: ID does not exist" containerID="c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f" Mar 07 09:18:47 crc kubenswrapper[4761]: I0307 09:18:47.021536 4761 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f"} err="failed to get container status \"c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f\": rpc error: code = NotFound desc = could not find container \"c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f\": container with ID starting with c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f not found: ID does not exist"
Mar 07 09:18:47 crc kubenswrapper[4761]: I0307 09:18:47.732211 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" path="/var/lib/kubelet/pods/11b69907-7369-4e80-9e25-8f2d2c0f72f0/volumes"
Mar 07 09:18:56 crc kubenswrapper[4761]: I0307 09:18:56.037589 4761 generic.go:334] "Generic (PLEG): container finished" podID="4d4f9001-7d67-467b-8028-ec6162564829" containerID="edc3b91ba9c93fdc8b8f4ab8405a9cde976a43eb0938c38a97b875a93e760b4c" exitCode=0
Mar 07 09:18:56 crc kubenswrapper[4761]: I0307 09:18:56.037760 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" event={"ID":"4d4f9001-7d67-467b-8028-ec6162564829","Type":"ContainerDied","Data":"edc3b91ba9c93fdc8b8f4ab8405a9cde976a43eb0938c38a97b875a93e760b4c"}
Mar 07 09:18:56 crc kubenswrapper[4761]: I0307 09:18:56.038554 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" event={"ID":"4d4f9001-7d67-467b-8028-ec6162564829","Type":"ContainerStarted","Data":"088b43adef6bec80908b30042392ebdf2079dcff14219f393321549ee7b68dd8"}
Mar 07 09:18:57 crc kubenswrapper[4761]: I0307 09:18:57.707676 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"
Mar 07 09:18:57 crc kubenswrapper[4761]: E0307 09:18:57.717022 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:19:12 crc kubenswrapper[4761]: I0307 09:19:12.705633 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"
Mar 07 09:19:12 crc kubenswrapper[4761]: E0307 09:19:12.706851 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:19:14 crc kubenswrapper[4761]: I0307 09:19:14.698682 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 09:19:14 crc kubenswrapper[4761]: I0307 09:19:14.699050 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 09:19:23 crc kubenswrapper[4761]: I0307 09:19:23.723161 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"
Mar 07 09:19:23 crc kubenswrapper[4761]: E0307 09:19:23.724396 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:19:34 crc kubenswrapper[4761]: I0307 09:19:34.707683 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 09:19:34 crc kubenswrapper[4761]: I0307 09:19:34.716542 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 09:19:36 crc kubenswrapper[4761]: I0307 09:19:36.706535 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"
Mar 07 09:19:36 crc kubenswrapper[4761]: E0307 09:19:36.707520 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.360009 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2pw4z"]
Mar 07 09:19:50 crc kubenswrapper[4761]: E0307 09:19:50.362187 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerName="extract-content"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.362317 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerName="extract-content"
Mar 07 09:19:50 crc kubenswrapper[4761]: E0307 09:19:50.362414 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="registry-server"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.362491 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="registry-server"
Mar 07 09:19:50 crc kubenswrapper[4761]: E0307 09:19:50.362572 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerName="extract-utilities"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.362649 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerName="extract-utilities"
Mar 07 09:19:50 crc kubenswrapper[4761]: E0307 09:19:50.362748 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerName="registry-server"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.362834 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerName="registry-server"
Mar 07 09:19:50 crc kubenswrapper[4761]: E0307 09:19:50.362926 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="extract-content"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.363008 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="extract-content"
Mar 07 09:19:50 crc kubenswrapper[4761]: E0307 09:19:50.363120 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="extract-utilities"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.363205 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="extract-utilities"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.363604 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerName="registry-server"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.363740 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="registry-server"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.365934 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2pw4z"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.374557 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2pw4z"]
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.498934 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-utilities\") pod \"redhat-operators-2pw4z\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " pod="openshift-marketplace/redhat-operators-2pw4z"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.499040 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-catalog-content\") pod \"redhat-operators-2pw4z\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " pod="openshift-marketplace/redhat-operators-2pw4z"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.499124 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdvsh\" (UniqueName: \"kubernetes.io/projected/c3a4893e-3950-430e-81ec-aaf676f073c0-kube-api-access-pdvsh\") pod \"redhat-operators-2pw4z\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " pod="openshift-marketplace/redhat-operators-2pw4z"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.601006 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-utilities\") pod \"redhat-operators-2pw4z\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " pod="openshift-marketplace/redhat-operators-2pw4z"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.601302 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-catalog-content\") pod \"redhat-operators-2pw4z\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " pod="openshift-marketplace/redhat-operators-2pw4z"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.601370 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdvsh\" (UniqueName: \"kubernetes.io/projected/c3a4893e-3950-430e-81ec-aaf676f073c0-kube-api-access-pdvsh\") pod \"redhat-operators-2pw4z\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " pod="openshift-marketplace/redhat-operators-2pw4z"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.601524 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-utilities\") pod \"redhat-operators-2pw4z\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " pod="openshift-marketplace/redhat-operators-2pw4z"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.601791 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-catalog-content\") pod \"redhat-operators-2pw4z\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " pod="openshift-marketplace/redhat-operators-2pw4z"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.626564 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdvsh\" (UniqueName: \"kubernetes.io/projected/c3a4893e-3950-430e-81ec-aaf676f073c0-kube-api-access-pdvsh\") pod \"redhat-operators-2pw4z\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " pod="openshift-marketplace/redhat-operators-2pw4z"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.686368 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2pw4z"
Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.705545 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"
Mar 07 09:19:50 crc kubenswrapper[4761]: E0307 09:19:50.705926 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:19:51 crc kubenswrapper[4761]: I0307 09:19:51.195603 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2pw4z"]
Mar 07 09:19:51 crc kubenswrapper[4761]: I0307 09:19:51.928173 4761 generic.go:334] "Generic (PLEG): container finished" podID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerID="a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557" exitCode=0
Mar 07 09:19:51 crc kubenswrapper[4761]: I0307 09:19:51.928239 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pw4z" event={"ID":"c3a4893e-3950-430e-81ec-aaf676f073c0","Type":"ContainerDied","Data":"a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557"}
Mar 07 09:19:51 crc kubenswrapper[4761]: I0307 09:19:51.928430 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pw4z" event={"ID":"c3a4893e-3950-430e-81ec-aaf676f073c0","Type":"ContainerStarted","Data":"5829eb14d37ddc532fa83563cbaeb1146a90b8052c14f005e3d33f4ce41e0cbd"}
Mar 07 09:19:52 crc kubenswrapper[4761]: I0307 09:19:52.940976 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pw4z" event={"ID":"c3a4893e-3950-430e-81ec-aaf676f073c0","Type":"ContainerStarted","Data":"a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f"}
Mar 07 09:19:59 crc kubenswrapper[4761]: I0307 09:19:59.017684 4761 generic.go:334] "Generic (PLEG): container finished" podID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerID="a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f" exitCode=0
Mar 07 09:19:59 crc kubenswrapper[4761]: I0307 09:19:59.017813 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pw4z" event={"ID":"c3a4893e-3950-430e-81ec-aaf676f073c0","Type":"ContainerDied","Data":"a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f"}
Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.039525 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pw4z" event={"ID":"c3a4893e-3950-430e-81ec-aaf676f073c0","Type":"ContainerStarted","Data":"4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b"}
Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.071463 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2pw4z" podStartSLOduration=2.580673532 podStartE2EDuration="10.071438878s" podCreationTimestamp="2026-03-07 09:19:50 +0000 UTC" firstStartedPulling="2026-03-07 09:19:51.93147917 +0000 UTC m=+5448.840645645" lastFinishedPulling="2026-03-07 09:19:59.422244526 +0000 UTC m=+5456.331410991" observedRunningTime="2026-03-07 09:20:00.06022594 +0000 UTC m=+5456.969392455" watchObservedRunningTime="2026-03-07 09:20:00.071438878 +0000 UTC m=+5456.980605363"
Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.153138 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547920-sj747"]
Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.159531 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547920-sj747"
Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.162440 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.162593 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.164513 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv"
Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.165203 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547920-sj747"]
Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.247216 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npt7g\" (UniqueName: \"kubernetes.io/projected/582dd6f3-adc8-4933-b406-bd096570fbbf-kube-api-access-npt7g\") pod \"auto-csr-approver-29547920-sj747\" (UID: \"582dd6f3-adc8-4933-b406-bd096570fbbf\") " pod="openshift-infra/auto-csr-approver-29547920-sj747"
Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.349754 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npt7g\" (UniqueName: \"kubernetes.io/projected/582dd6f3-adc8-4933-b406-bd096570fbbf-kube-api-access-npt7g\") pod \"auto-csr-approver-29547920-sj747\" (UID: \"582dd6f3-adc8-4933-b406-bd096570fbbf\") " pod="openshift-infra/auto-csr-approver-29547920-sj747"
Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.372502 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npt7g\" (UniqueName: \"kubernetes.io/projected/582dd6f3-adc8-4933-b406-bd096570fbbf-kube-api-access-npt7g\") pod \"auto-csr-approver-29547920-sj747\" (UID: \"582dd6f3-adc8-4933-b406-bd096570fbbf\") " pod="openshift-infra/auto-csr-approver-29547920-sj747"
Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.483649 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547920-sj747"
Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.687077 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2pw4z"
Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.687416 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2pw4z"
Mar 07 09:20:01 crc kubenswrapper[4761]: I0307 09:20:01.077582 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547920-sj747"]
Mar 07 09:20:01 crc kubenswrapper[4761]: I0307 09:20:01.705688 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"
Mar 07 09:20:01 crc kubenswrapper[4761]: E0307 09:20:01.706255 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:20:01 crc kubenswrapper[4761]: I0307 09:20:01.752093 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2pw4z" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="registry-server" probeResult="failure" output=<
Mar 07 09:20:01 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 09:20:01 crc kubenswrapper[4761]: >
Mar 07 09:20:02 crc kubenswrapper[4761]: I0307 09:20:02.062079 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547920-sj747" event={"ID":"582dd6f3-adc8-4933-b406-bd096570fbbf","Type":"ContainerStarted","Data":"ef70c104ec5c86674cebe3be4c8ada7d7f7a0831b7e062c685ba8d1ef510da31"}
Mar 07 09:20:03 crc kubenswrapper[4761]: I0307 09:20:03.081319 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547920-sj747" event={"ID":"582dd6f3-adc8-4933-b406-bd096570fbbf","Type":"ContainerStarted","Data":"861955b25ecb851fdf0a059445979f1c93ff558bd5e94bf49b3f0234932445b2"}
Mar 07 09:20:03 crc kubenswrapper[4761]: I0307 09:20:03.099321 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547920-sj747" podStartSLOduration=2.073586725 podStartE2EDuration="3.099303064s" podCreationTimestamp="2026-03-07 09:20:00 +0000 UTC" firstStartedPulling="2026-03-07 09:20:01.085963679 +0000 UTC m=+5457.995130154" lastFinishedPulling="2026-03-07 09:20:02.111680018 +0000 UTC m=+5459.020846493" observedRunningTime="2026-03-07 09:20:03.096116455 +0000 UTC m=+5460.005282920" watchObservedRunningTime="2026-03-07 09:20:03.099303064 +0000 UTC m=+5460.008469539"
Mar 07 09:20:05 crc kubenswrapper[4761]: I0307 09:20:05.114028 4761 generic.go:334] "Generic (PLEG): container finished" podID="582dd6f3-adc8-4933-b406-bd096570fbbf" containerID="861955b25ecb851fdf0a059445979f1c93ff558bd5e94bf49b3f0234932445b2" exitCode=0
Mar 07 09:20:05 crc kubenswrapper[4761]: I0307 09:20:05.114093 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547920-sj747" event={"ID":"582dd6f3-adc8-4933-b406-bd096570fbbf","Type":"ContainerDied","Data":"861955b25ecb851fdf0a059445979f1c93ff558bd5e94bf49b3f0234932445b2"}
Mar 07 09:20:06 crc kubenswrapper[4761]: I0307 09:20:06.600304 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547920-sj747"
Mar 07 09:20:06 crc kubenswrapper[4761]: I0307 09:20:06.714180 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npt7g\" (UniqueName: \"kubernetes.io/projected/582dd6f3-adc8-4933-b406-bd096570fbbf-kube-api-access-npt7g\") pod \"582dd6f3-adc8-4933-b406-bd096570fbbf\" (UID: \"582dd6f3-adc8-4933-b406-bd096570fbbf\") "
Mar 07 09:20:06 crc kubenswrapper[4761]: I0307 09:20:06.726050 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582dd6f3-adc8-4933-b406-bd096570fbbf-kube-api-access-npt7g" (OuterVolumeSpecName: "kube-api-access-npt7g") pod "582dd6f3-adc8-4933-b406-bd096570fbbf" (UID: "582dd6f3-adc8-4933-b406-bd096570fbbf"). InnerVolumeSpecName "kube-api-access-npt7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 09:20:06 crc kubenswrapper[4761]: I0307 09:20:06.817832 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npt7g\" (UniqueName: \"kubernetes.io/projected/582dd6f3-adc8-4933-b406-bd096570fbbf-kube-api-access-npt7g\") on node \"crc\" DevicePath \"\""
Mar 07 09:20:07 crc kubenswrapper[4761]: I0307 09:20:07.143657 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547920-sj747" event={"ID":"582dd6f3-adc8-4933-b406-bd096570fbbf","Type":"ContainerDied","Data":"ef70c104ec5c86674cebe3be4c8ada7d7f7a0831b7e062c685ba8d1ef510da31"}
Mar 07 09:20:07 crc kubenswrapper[4761]: I0307 09:20:07.143699 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef70c104ec5c86674cebe3be4c8ada7d7f7a0831b7e062c685ba8d1ef510da31"
Mar 07 09:20:07 crc kubenswrapper[4761]: I0307 09:20:07.143795 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547920-sj747"
Mar 07 09:20:07 crc kubenswrapper[4761]: I0307 09:20:07.198969 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547914-mnrtr"]
Mar 07 09:20:07 crc kubenswrapper[4761]: I0307 09:20:07.211613 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547914-mnrtr"]
Mar 07 09:20:07 crc kubenswrapper[4761]: I0307 09:20:07.730370 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8285a2d6-1653-46b3-ac0e-481bf33fa2e0" path="/var/lib/kubelet/pods/8285a2d6-1653-46b3-ac0e-481bf33fa2e0/volumes"
Mar 07 09:20:11 crc kubenswrapper[4761]: I0307 09:20:11.746258 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2pw4z" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="registry-server" probeResult="failure" output=<
Mar 07 09:20:11 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 09:20:11 crc kubenswrapper[4761]: >
Mar 07 09:20:12 crc kubenswrapper[4761]: I0307 09:20:12.706140 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"
Mar 07 09:20:12 crc kubenswrapper[4761]: E0307 09:20:12.706663 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:20:21 crc kubenswrapper[4761]: I0307 09:20:21.743397 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2pw4z" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="registry-server" probeResult="failure" output=<
Mar 07 09:20:21 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 09:20:21 crc kubenswrapper[4761]: >
Mar 07 09:20:25 crc kubenswrapper[4761]: I0307 09:20:25.475826 4761 scope.go:117] "RemoveContainer" containerID="eea4e62f70ef92b7ddd8ac5f32ab9d1a9b500bbe7bcace62e37bbaeaf124c8b6"
Mar 07 09:20:27 crc kubenswrapper[4761]: I0307 09:20:27.706256 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"
Mar 07 09:20:27 crc kubenswrapper[4761]: E0307 09:20:27.707346 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:20:31 crc kubenswrapper[4761]: I0307 09:20:31.756908 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2pw4z" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="registry-server" probeResult="failure" output=<
Mar 07 09:20:31 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 09:20:31 crc kubenswrapper[4761]: >
Mar 07 09:20:40 crc kubenswrapper[4761]: I0307 09:20:40.749383 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2pw4z"
Mar 07 09:20:40 crc kubenswrapper[4761]: I0307 09:20:40.814195 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2pw4z"
Mar 07 09:20:40 crc kubenswrapper[4761]: I0307 09:20:40.999834 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2pw4z"]
Mar 07 09:20:43 crc kubenswrapper[4761]: I0307 09:20:43.061422 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"
Mar 07 09:20:43 crc kubenswrapper[4761]: E0307 09:20:43.062566 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:20:43 crc kubenswrapper[4761]: I0307 09:20:43.079790 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2pw4z" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="registry-server" containerID="cri-o://4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b" gracePeriod=2
Mar 07 09:20:43 crc kubenswrapper[4761]: I0307 09:20:43.885345 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2pw4z"
Mar 07 09:20:43 crc kubenswrapper[4761]: I0307 09:20:43.986846 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-utilities\") pod \"c3a4893e-3950-430e-81ec-aaf676f073c0\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") "
Mar 07 09:20:43 crc kubenswrapper[4761]: I0307 09:20:43.987151 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-catalog-content\") pod \"c3a4893e-3950-430e-81ec-aaf676f073c0\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") "
Mar 07 09:20:43 crc kubenswrapper[4761]: I0307 09:20:43.987299 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdvsh\" (UniqueName: \"kubernetes.io/projected/c3a4893e-3950-430e-81ec-aaf676f073c0-kube-api-access-pdvsh\") pod \"c3a4893e-3950-430e-81ec-aaf676f073c0\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") "
Mar 07 09:20:43 crc kubenswrapper[4761]: I0307 09:20:43.987326 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-utilities" (OuterVolumeSpecName: "utilities") pod "c3a4893e-3950-430e-81ec-aaf676f073c0" (UID: "c3a4893e-3950-430e-81ec-aaf676f073c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 09:20:43 crc kubenswrapper[4761]: I0307 09:20:43.988256 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 09:20:43 crc kubenswrapper[4761]: I0307 09:20:43.997148 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a4893e-3950-430e-81ec-aaf676f073c0-kube-api-access-pdvsh" (OuterVolumeSpecName: "kube-api-access-pdvsh") pod "c3a4893e-3950-430e-81ec-aaf676f073c0" (UID: "c3a4893e-3950-430e-81ec-aaf676f073c0"). InnerVolumeSpecName "kube-api-access-pdvsh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.091280 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdvsh\" (UniqueName: \"kubernetes.io/projected/c3a4893e-3950-430e-81ec-aaf676f073c0-kube-api-access-pdvsh\") on node \"crc\" DevicePath \"\""
Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.094825 4761 generic.go:334] "Generic (PLEG): container finished" podID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerID="4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b" exitCode=0
Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.094879 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pw4z" event={"ID":"c3a4893e-3950-430e-81ec-aaf676f073c0","Type":"ContainerDied","Data":"4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b"}
Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.094908 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pw4z" event={"ID":"c3a4893e-3950-430e-81ec-aaf676f073c0","Type":"ContainerDied","Data":"5829eb14d37ddc532fa83563cbaeb1146a90b8052c14f005e3d33f4ce41e0cbd"}
Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.094925 4761 scope.go:117] "RemoveContainer" containerID="4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b"
Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.094985 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2pw4z"
Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.104876 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3a4893e-3950-430e-81ec-aaf676f073c0" (UID: "c3a4893e-3950-430e-81ec-aaf676f073c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.118187 4761 scope.go:117] "RemoveContainer" containerID="a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f"
Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.138699 4761 scope.go:117] "RemoveContainer" containerID="a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557"
Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.193679 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.204524 4761 scope.go:117] "RemoveContainer" containerID="4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b"
Mar 07 09:20:44 crc kubenswrapper[4761]: E0307 09:20:44.205091 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b\": container with ID starting with 4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b not found: ID does not exist" containerID="4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b"
Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.205143 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b"} err="failed to get container status \"4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b\": rpc error: code = NotFound desc = could not find container \"4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b\": container with ID starting with 4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b not found: ID does not exist"
Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.205164 4761 scope.go:117] "RemoveContainer" containerID="a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f"
Mar 07 09:20:44 crc kubenswrapper[4761]: E0307 09:20:44.205495 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f\": container with ID starting with a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f not found: ID does not exist" containerID="a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f"
Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.205533 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f"} err="failed to get container status \"a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f\": rpc error: code = NotFound desc = could not find container \"a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f\": container with ID starting with a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f not found: ID does not exist"
Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.205562 4761 scope.go:117] "RemoveContainer" containerID="a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557"
Mar 07 09:20:44 crc kubenswrapper[4761]: E0307 09:20:44.206089 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557\": container with ID starting with a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557 not found: ID does not exist" containerID="a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557"
Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.206143 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557"} err="failed to get container status \"a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557\": rpc error: code = NotFound desc = could not find container \"a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557\": container with ID starting with a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557 not found: ID does not exist"
Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.447272 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2pw4z"]
Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.464702 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2pw4z"]
Mar 07 09:20:45 crc kubenswrapper[4761]: I0307 09:20:45.746417 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" path="/var/lib/kubelet/pods/c3a4893e-3950-430e-81ec-aaf676f073c0/volumes"
Mar 07 09:20:56 crc kubenswrapper[4761]: I0307 09:20:56.706050 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"
Mar 07 09:20:56 crc kubenswrapper[4761]: E0307 09:20:56.707439 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:21:08 crc kubenswrapper[4761]: I0307 09:21:08.707898 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"
Mar 07 09:21:08 crc kubenswrapper[4761]: E0307 09:21:08.709108 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:21:21 crc kubenswrapper[4761]: I0307 09:21:21.707788 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"
Mar 07 09:21:21 crc kubenswrapper[4761]: E0307 09:21:21.708565 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:21:35 crc kubenswrapper[4761]: I0307 09:21:35.707269 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"
Mar 07 09:21:35 crc kubenswrapper[4761]: E0307 
09:21:35.708419 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:21:49 crc kubenswrapper[4761]: I0307 09:21:49.706512 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:21:49 crc kubenswrapper[4761]: E0307 09:21:49.707595 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.209794 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547922-mbgmv"] Mar 07 09:22:00 crc kubenswrapper[4761]: E0307 09:22:00.211048 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="extract-content" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.211066 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="extract-content" Mar 07 09:22:00 crc kubenswrapper[4761]: E0307 09:22:00.211122 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="registry-server" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.211215 4761 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="registry-server" Mar 07 09:22:00 crc kubenswrapper[4761]: E0307 09:22:00.211232 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582dd6f3-adc8-4933-b406-bd096570fbbf" containerName="oc" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.211239 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="582dd6f3-adc8-4933-b406-bd096570fbbf" containerName="oc" Mar 07 09:22:00 crc kubenswrapper[4761]: E0307 09:22:00.211257 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="extract-utilities" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.211267 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="extract-utilities" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.211575 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="582dd6f3-adc8-4933-b406-bd096570fbbf" containerName="oc" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.211597 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="registry-server" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.212590 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547922-mbgmv" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.215178 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.215209 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.217272 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.225907 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547922-mbgmv"] Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.236525 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9hdc\" (UniqueName: \"kubernetes.io/projected/cb4b54e0-9e87-43bd-99c1-dd0fb9027801-kube-api-access-s9hdc\") pod \"auto-csr-approver-29547922-mbgmv\" (UID: \"cb4b54e0-9e87-43bd-99c1-dd0fb9027801\") " pod="openshift-infra/auto-csr-approver-29547922-mbgmv" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.338826 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9hdc\" (UniqueName: \"kubernetes.io/projected/cb4b54e0-9e87-43bd-99c1-dd0fb9027801-kube-api-access-s9hdc\") pod \"auto-csr-approver-29547922-mbgmv\" (UID: \"cb4b54e0-9e87-43bd-99c1-dd0fb9027801\") " pod="openshift-infra/auto-csr-approver-29547922-mbgmv" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.389584 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9hdc\" (UniqueName: \"kubernetes.io/projected/cb4b54e0-9e87-43bd-99c1-dd0fb9027801-kube-api-access-s9hdc\") pod \"auto-csr-approver-29547922-mbgmv\" (UID: \"cb4b54e0-9e87-43bd-99c1-dd0fb9027801\") " 
pod="openshift-infra/auto-csr-approver-29547922-mbgmv" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.570971 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547922-mbgmv" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.706419 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:22:00 crc kubenswrapper[4761]: E0307 09:22:00.707096 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:22:01 crc kubenswrapper[4761]: I0307 09:22:01.327524 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547922-mbgmv"] Mar 07 09:22:01 crc kubenswrapper[4761]: I0307 09:22:01.357560 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 09:22:02 crc kubenswrapper[4761]: I0307 09:22:02.153342 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547922-mbgmv" event={"ID":"cb4b54e0-9e87-43bd-99c1-dd0fb9027801","Type":"ContainerStarted","Data":"2b5e6633bebba4965e5d914b164c8f1dfb1e790a7bf666ee745f60e543d0388d"} Mar 07 09:22:04 crc kubenswrapper[4761]: I0307 09:22:04.176933 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547922-mbgmv" event={"ID":"cb4b54e0-9e87-43bd-99c1-dd0fb9027801","Type":"ContainerStarted","Data":"9cdb2507f999d812dc16f4ffd1e90008e63bd681015d6c2edcad8e1db5068010"} Mar 07 09:22:04 crc kubenswrapper[4761]: I0307 09:22:04.198463 4761 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547922-mbgmv" podStartSLOduration=2.599667373 podStartE2EDuration="4.19844183s" podCreationTimestamp="2026-03-07 09:22:00 +0000 UTC" firstStartedPulling="2026-03-07 09:22:01.356508445 +0000 UTC m=+5578.265674920" lastFinishedPulling="2026-03-07 09:22:02.955282902 +0000 UTC m=+5579.864449377" observedRunningTime="2026-03-07 09:22:04.189451198 +0000 UTC m=+5581.098617683" watchObservedRunningTime="2026-03-07 09:22:04.19844183 +0000 UTC m=+5581.107608305" Mar 07 09:22:05 crc kubenswrapper[4761]: I0307 09:22:05.200013 4761 generic.go:334] "Generic (PLEG): container finished" podID="cb4b54e0-9e87-43bd-99c1-dd0fb9027801" containerID="9cdb2507f999d812dc16f4ffd1e90008e63bd681015d6c2edcad8e1db5068010" exitCode=0 Mar 07 09:22:05 crc kubenswrapper[4761]: I0307 09:22:05.200558 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547922-mbgmv" event={"ID":"cb4b54e0-9e87-43bd-99c1-dd0fb9027801","Type":"ContainerDied","Data":"9cdb2507f999d812dc16f4ffd1e90008e63bd681015d6c2edcad8e1db5068010"} Mar 07 09:22:06 crc kubenswrapper[4761]: I0307 09:22:06.708284 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547922-mbgmv" Mar 07 09:22:06 crc kubenswrapper[4761]: I0307 09:22:06.725635 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9hdc\" (UniqueName: \"kubernetes.io/projected/cb4b54e0-9e87-43bd-99c1-dd0fb9027801-kube-api-access-s9hdc\") pod \"cb4b54e0-9e87-43bd-99c1-dd0fb9027801\" (UID: \"cb4b54e0-9e87-43bd-99c1-dd0fb9027801\") " Mar 07 09:22:06 crc kubenswrapper[4761]: I0307 09:22:06.733036 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb4b54e0-9e87-43bd-99c1-dd0fb9027801-kube-api-access-s9hdc" (OuterVolumeSpecName: "kube-api-access-s9hdc") pod "cb4b54e0-9e87-43bd-99c1-dd0fb9027801" (UID: "cb4b54e0-9e87-43bd-99c1-dd0fb9027801"). InnerVolumeSpecName "kube-api-access-s9hdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:22:06 crc kubenswrapper[4761]: I0307 09:22:06.828531 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9hdc\" (UniqueName: \"kubernetes.io/projected/cb4b54e0-9e87-43bd-99c1-dd0fb9027801-kube-api-access-s9hdc\") on node \"crc\" DevicePath \"\"" Mar 07 09:22:06 crc kubenswrapper[4761]: I0307 09:22:06.835095 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547916-42c74"] Mar 07 09:22:06 crc kubenswrapper[4761]: I0307 09:22:06.844869 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547916-42c74"] Mar 07 09:22:07 crc kubenswrapper[4761]: I0307 09:22:07.235992 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547922-mbgmv" event={"ID":"cb4b54e0-9e87-43bd-99c1-dd0fb9027801","Type":"ContainerDied","Data":"2b5e6633bebba4965e5d914b164c8f1dfb1e790a7bf666ee745f60e543d0388d"} Mar 07 09:22:07 crc kubenswrapper[4761]: I0307 09:22:07.236035 4761 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="2b5e6633bebba4965e5d914b164c8f1dfb1e790a7bf666ee745f60e543d0388d" Mar 07 09:22:07 crc kubenswrapper[4761]: I0307 09:22:07.236072 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547922-mbgmv" Mar 07 09:22:07 crc kubenswrapper[4761]: I0307 09:22:07.726429 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39691e56-a95c-4f7c-827a-d88b17d628f4" path="/var/lib/kubelet/pods/39691e56-a95c-4f7c-827a-d88b17d628f4/volumes" Mar 07 09:22:14 crc kubenswrapper[4761]: I0307 09:22:14.705958 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:22:14 crc kubenswrapper[4761]: E0307 09:22:14.706955 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:22:26 crc kubenswrapper[4761]: I0307 09:22:26.707184 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:22:26 crc kubenswrapper[4761]: E0307 09:22:26.708520 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:22:38 crc kubenswrapper[4761]: I0307 09:22:38.706119 4761 scope.go:117] "RemoveContainer" 
containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:22:38 crc kubenswrapper[4761]: E0307 09:22:38.706769 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:22:49 crc kubenswrapper[4761]: I0307 09:22:49.707310 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:22:49 crc kubenswrapper[4761]: E0307 09:22:49.708144 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:23:02 crc kubenswrapper[4761]: I0307 09:23:02.714657 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:23:02 crc kubenswrapper[4761]: E0307 09:23:02.719755 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:23:13 crc kubenswrapper[4761]: I0307 09:23:13.726511 4761 scope.go:117] 
"RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:23:13 crc kubenswrapper[4761]: E0307 09:23:13.727671 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:23:24 crc kubenswrapper[4761]: I0307 09:23:24.706654 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:23:24 crc kubenswrapper[4761]: E0307 09:23:24.707894 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:23:25 crc kubenswrapper[4761]: I0307 09:23:25.671399 4761 scope.go:117] "RemoveContainer" containerID="81cfe2e925e6f21f93bd135229819b95131acd12552e3ecc6934b0edcd5d0a68" Mar 07 09:23:35 crc kubenswrapper[4761]: I0307 09:23:35.705597 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:23:35 crc kubenswrapper[4761]: E0307 09:23:35.706425 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:23:50 crc kubenswrapper[4761]: I0307 09:23:50.706707 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:23:51 crc kubenswrapper[4761]: I0307 09:23:51.701280 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"a43ace93383b743eb2d6cd7f20bb40b06f6d768f904a91bafc3da780f93481ce"} Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.161766 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547924-pr254"] Mar 07 09:24:00 crc kubenswrapper[4761]: E0307 09:24:00.163113 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb4b54e0-9e87-43bd-99c1-dd0fb9027801" containerName="oc" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.163134 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb4b54e0-9e87-43bd-99c1-dd0fb9027801" containerName="oc" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.163479 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb4b54e0-9e87-43bd-99c1-dd0fb9027801" containerName="oc" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.164778 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547924-pr254" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.167237 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.167981 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.168126 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.179929 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547924-pr254"] Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.222149 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f744r\" (UniqueName: \"kubernetes.io/projected/402eb779-1735-4115-a306-00df8c5240aa-kube-api-access-f744r\") pod \"auto-csr-approver-29547924-pr254\" (UID: \"402eb779-1735-4115-a306-00df8c5240aa\") " pod="openshift-infra/auto-csr-approver-29547924-pr254" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.325690 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f744r\" (UniqueName: \"kubernetes.io/projected/402eb779-1735-4115-a306-00df8c5240aa-kube-api-access-f744r\") pod \"auto-csr-approver-29547924-pr254\" (UID: \"402eb779-1735-4115-a306-00df8c5240aa\") " pod="openshift-infra/auto-csr-approver-29547924-pr254" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.348202 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f744r\" (UniqueName: \"kubernetes.io/projected/402eb779-1735-4115-a306-00df8c5240aa-kube-api-access-f744r\") pod \"auto-csr-approver-29547924-pr254\" (UID: \"402eb779-1735-4115-a306-00df8c5240aa\") " 
pod="openshift-infra/auto-csr-approver-29547924-pr254" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.486835 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547924-pr254" Mar 07 09:24:01 crc kubenswrapper[4761]: I0307 09:24:01.081531 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547924-pr254"] Mar 07 09:24:01 crc kubenswrapper[4761]: I0307 09:24:01.839777 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547924-pr254" event={"ID":"402eb779-1735-4115-a306-00df8c5240aa","Type":"ContainerStarted","Data":"84dd3e634f7f2f75f5b7a20754632fcaf57421f3d755fe8ca4cb893c6741667a"} Mar 07 09:24:02 crc kubenswrapper[4761]: I0307 09:24:02.854883 4761 generic.go:334] "Generic (PLEG): container finished" podID="402eb779-1735-4115-a306-00df8c5240aa" containerID="8693c5f8a7641fc04ea6fca2b5174f2bc562b7f8b1848e27635f2da9f77fd7f4" exitCode=0 Mar 07 09:24:02 crc kubenswrapper[4761]: I0307 09:24:02.854998 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547924-pr254" event={"ID":"402eb779-1735-4115-a306-00df8c5240aa","Type":"ContainerDied","Data":"8693c5f8a7641fc04ea6fca2b5174f2bc562b7f8b1848e27635f2da9f77fd7f4"} Mar 07 09:24:04 crc kubenswrapper[4761]: I0307 09:24:04.377413 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547924-pr254" Mar 07 09:24:04 crc kubenswrapper[4761]: I0307 09:24:04.533394 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f744r\" (UniqueName: \"kubernetes.io/projected/402eb779-1735-4115-a306-00df8c5240aa-kube-api-access-f744r\") pod \"402eb779-1735-4115-a306-00df8c5240aa\" (UID: \"402eb779-1735-4115-a306-00df8c5240aa\") " Mar 07 09:24:04 crc kubenswrapper[4761]: I0307 09:24:04.538614 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402eb779-1735-4115-a306-00df8c5240aa-kube-api-access-f744r" (OuterVolumeSpecName: "kube-api-access-f744r") pod "402eb779-1735-4115-a306-00df8c5240aa" (UID: "402eb779-1735-4115-a306-00df8c5240aa"). InnerVolumeSpecName "kube-api-access-f744r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:24:04 crc kubenswrapper[4761]: I0307 09:24:04.636330 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f744r\" (UniqueName: \"kubernetes.io/projected/402eb779-1735-4115-a306-00df8c5240aa-kube-api-access-f744r\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:04 crc kubenswrapper[4761]: I0307 09:24:04.887693 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547924-pr254" event={"ID":"402eb779-1735-4115-a306-00df8c5240aa","Type":"ContainerDied","Data":"84dd3e634f7f2f75f5b7a20754632fcaf57421f3d755fe8ca4cb893c6741667a"} Mar 07 09:24:04 crc kubenswrapper[4761]: I0307 09:24:04.887801 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547924-pr254" Mar 07 09:24:04 crc kubenswrapper[4761]: I0307 09:24:04.887826 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84dd3e634f7f2f75f5b7a20754632fcaf57421f3d755fe8ca4cb893c6741667a" Mar 07 09:24:05 crc kubenswrapper[4761]: I0307 09:24:05.449550 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547918-24jnd"] Mar 07 09:24:05 crc kubenswrapper[4761]: I0307 09:24:05.465364 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547918-24jnd"] Mar 07 09:24:05 crc kubenswrapper[4761]: I0307 09:24:05.719378 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca0bc391-b6d1-4d68-a5dd-047c1f5f5009" path="/var/lib/kubelet/pods/ca0bc391-b6d1-4d68-a5dd-047c1f5f5009/volumes" Mar 07 09:24:25 crc kubenswrapper[4761]: I0307 09:24:25.770771 4761 scope.go:117] "RemoveContainer" containerID="603911c1614386a9b343d6f7ef703fbd8a5bf73eba19af2e1b950c8408682339" Mar 07 09:24:29 crc kubenswrapper[4761]: I0307 09:24:29.274539 4761 generic.go:334] "Generic (PLEG): container finished" podID="cf1a0263-2849-4fc3-a733-eebca0481aae" containerID="3843e59e15646ab966087faa2ca0e895e0d384887c6b0a13b92f60562c3c3edb" exitCode=1 Mar 07 09:24:29 crc kubenswrapper[4761]: I0307 09:24:29.275127 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"cf1a0263-2849-4fc3-a733-eebca0481aae","Type":"ContainerDied","Data":"3843e59e15646ab966087faa2ca0e895e0d384887c6b0a13b92f60562c3c3edb"} Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.787732 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887109 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-temporary\") pod \"cf1a0263-2849-4fc3-a733-eebca0481aae\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887173 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cf1a0263-2849-4fc3-a733-eebca0481aae\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887209 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ca-certs\") pod \"cf1a0263-2849-4fc3-a733-eebca0481aae\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887231 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config-secret\") pod \"cf1a0263-2849-4fc3-a733-eebca0481aae\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887265 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ssh-key\") pod \"cf1a0263-2849-4fc3-a733-eebca0481aae\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887363 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" 
(UniqueName: \"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config\") pod \"cf1a0263-2849-4fc3-a733-eebca0481aae\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887457 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-workdir\") pod \"cf1a0263-2849-4fc3-a733-eebca0481aae\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887558 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgjsd\" (UniqueName: \"kubernetes.io/projected/cf1a0263-2849-4fc3-a733-eebca0481aae-kube-api-access-cgjsd\") pod \"cf1a0263-2849-4fc3-a733-eebca0481aae\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887633 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-config-data\") pod \"cf1a0263-2849-4fc3-a733-eebca0481aae\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887776 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "cf1a0263-2849-4fc3-a733-eebca0481aae" (UID: "cf1a0263-2849-4fc3-a733-eebca0481aae"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.888322 4761 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.890305 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-config-data" (OuterVolumeSpecName: "config-data") pod "cf1a0263-2849-4fc3-a733-eebca0481aae" (UID: "cf1a0263-2849-4fc3-a733-eebca0481aae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.895150 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf1a0263-2849-4fc3-a733-eebca0481aae-kube-api-access-cgjsd" (OuterVolumeSpecName: "kube-api-access-cgjsd") pod "cf1a0263-2849-4fc3-a733-eebca0481aae" (UID: "cf1a0263-2849-4fc3-a733-eebca0481aae"). InnerVolumeSpecName "kube-api-access-cgjsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.895573 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "cf1a0263-2849-4fc3-a733-eebca0481aae" (UID: "cf1a0263-2849-4fc3-a733-eebca0481aae"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.897007 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "cf1a0263-2849-4fc3-a733-eebca0481aae" (UID: "cf1a0263-2849-4fc3-a733-eebca0481aae"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.934147 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "cf1a0263-2849-4fc3-a733-eebca0481aae" (UID: "cf1a0263-2849-4fc3-a733-eebca0481aae"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.936693 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cf1a0263-2849-4fc3-a733-eebca0481aae" (UID: "cf1a0263-2849-4fc3-a733-eebca0481aae"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.944310 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "cf1a0263-2849-4fc3-a733-eebca0481aae" (UID: "cf1a0263-2849-4fc3-a733-eebca0481aae"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.991227 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "cf1a0263-2849-4fc3-a733-eebca0481aae" (UID: "cf1a0263-2849-4fc3-a733-eebca0481aae"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.992172 4761 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.992283 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgjsd\" (UniqueName: \"kubernetes.io/projected/cf1a0263-2849-4fc3-a733-eebca0481aae-kube-api-access-cgjsd\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.992365 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.992937 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.993012 4761 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ca-certs\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.993079 4761 reconciler_common.go:293] "Volume detached for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.993185 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ssh-key\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.993239 4761 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:31 crc kubenswrapper[4761]: I0307 09:24:31.030709 4761 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Mar 07 09:24:31 crc kubenswrapper[4761]: I0307 09:24:31.095619 4761 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:31 crc kubenswrapper[4761]: I0307 09:24:31.311671 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"cf1a0263-2849-4fc3-a733-eebca0481aae","Type":"ContainerDied","Data":"c89bee12915462bdfe58e59bd9de049cee18dd7969563e4c2bcb042bce31866a"} Mar 07 09:24:31 crc kubenswrapper[4761]: I0307 09:24:31.311797 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c89bee12915462bdfe58e59bd9de049cee18dd7969563e4c2bcb042bce31866a" Mar 07 09:24:31 crc kubenswrapper[4761]: I0307 09:24:31.311921 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.773600 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 07 09:24:33 crc kubenswrapper[4761]: E0307 09:24:33.774753 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402eb779-1735-4115-a306-00df8c5240aa" containerName="oc" Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.774767 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="402eb779-1735-4115-a306-00df8c5240aa" containerName="oc" Mar 07 09:24:33 crc kubenswrapper[4761]: E0307 09:24:33.774794 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1a0263-2849-4fc3-a733-eebca0481aae" containerName="tempest-tests-tempest-tests-runner" Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.774800 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1a0263-2849-4fc3-a733-eebca0481aae" containerName="tempest-tests-tempest-tests-runner" Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.775056 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="402eb779-1735-4115-a306-00df8c5240aa" containerName="oc" Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.775075 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf1a0263-2849-4fc3-a733-eebca0481aae" containerName="tempest-tests-tempest-tests-runner" Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.775945 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.778900 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pgk27" Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.786138 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.964883 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5vnf\" (UniqueName: \"kubernetes.io/projected/03e65954-2a26-4e66-b033-a57a384097f1-kube-api-access-k5vnf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"03e65954-2a26-4e66-b033-a57a384097f1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.964958 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"03e65954-2a26-4e66-b033-a57a384097f1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 07 09:24:34 crc kubenswrapper[4761]: I0307 09:24:34.068447 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5vnf\" (UniqueName: \"kubernetes.io/projected/03e65954-2a26-4e66-b033-a57a384097f1-kube-api-access-k5vnf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"03e65954-2a26-4e66-b033-a57a384097f1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 07 09:24:34 crc kubenswrapper[4761]: I0307 09:24:34.068539 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"03e65954-2a26-4e66-b033-a57a384097f1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 07 09:24:34 crc kubenswrapper[4761]: I0307 09:24:34.069985 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"03e65954-2a26-4e66-b033-a57a384097f1\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 07 09:24:34 crc kubenswrapper[4761]: I0307 09:24:34.088701 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5vnf\" (UniqueName: \"kubernetes.io/projected/03e65954-2a26-4e66-b033-a57a384097f1-kube-api-access-k5vnf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"03e65954-2a26-4e66-b033-a57a384097f1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 07 09:24:34 crc kubenswrapper[4761]: I0307 09:24:34.106926 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"03e65954-2a26-4e66-b033-a57a384097f1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 07 09:24:34 crc kubenswrapper[4761]: I0307 09:24:34.399496 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 07 09:24:34 crc kubenswrapper[4761]: I0307 09:24:34.959368 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 07 09:24:35 crc kubenswrapper[4761]: I0307 09:24:35.360892 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"03e65954-2a26-4e66-b033-a57a384097f1","Type":"ContainerStarted","Data":"c1a096ecc193e81416de2621d52fe2034d513a993bd5ec1299971cd824c9066a"} Mar 07 09:24:36 crc kubenswrapper[4761]: I0307 09:24:36.372279 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"03e65954-2a26-4e66-b033-a57a384097f1","Type":"ContainerStarted","Data":"e3ed2dd23bca57721cb33528bc907160f9aa20f75eea69ed2c612c9d2fa14126"} Mar 07 09:24:36 crc kubenswrapper[4761]: I0307 09:24:36.393663 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.277248852 podStartE2EDuration="3.393646763s" podCreationTimestamp="2026-03-07 09:24:33 +0000 UTC" firstStartedPulling="2026-03-07 09:24:34.966918413 +0000 UTC m=+5731.876084888" lastFinishedPulling="2026-03-07 09:24:36.083316314 +0000 UTC m=+5732.992482799" observedRunningTime="2026-03-07 09:24:36.386242959 +0000 UTC m=+5733.295409444" watchObservedRunningTime="2026-03-07 09:24:36.393646763 +0000 UTC m=+5733.302813228" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.435041 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ns4hc/must-gather-7h8wv"] Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.438258 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ns4hc/must-gather-7h8wv" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.440496 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ns4hc"/"default-dockercfg-fdr8d" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.440656 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ns4hc"/"kube-root-ca.crt" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.441325 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ns4hc"/"openshift-service-ca.crt" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.518288 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ns4hc/must-gather-7h8wv"] Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.532090 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-must-gather-output\") pod \"must-gather-7h8wv\" (UID: \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\") " pod="openshift-must-gather-ns4hc/must-gather-7h8wv" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.532492 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mr44\" (UniqueName: \"kubernetes.io/projected/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-kube-api-access-6mr44\") pod \"must-gather-7h8wv\" (UID: \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\") " pod="openshift-must-gather-ns4hc/must-gather-7h8wv" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.634772 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-must-gather-output\") pod \"must-gather-7h8wv\" (UID: \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\") " 
pod="openshift-must-gather-ns4hc/must-gather-7h8wv" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.634899 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mr44\" (UniqueName: \"kubernetes.io/projected/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-kube-api-access-6mr44\") pod \"must-gather-7h8wv\" (UID: \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\") " pod="openshift-must-gather-ns4hc/must-gather-7h8wv" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.635442 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-must-gather-output\") pod \"must-gather-7h8wv\" (UID: \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\") " pod="openshift-must-gather-ns4hc/must-gather-7h8wv" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.658815 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mr44\" (UniqueName: \"kubernetes.io/projected/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-kube-api-access-6mr44\") pod \"must-gather-7h8wv\" (UID: \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\") " pod="openshift-must-gather-ns4hc/must-gather-7h8wv" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.754856 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ns4hc/must-gather-7h8wv" Mar 07 09:25:30 crc kubenswrapper[4761]: I0307 09:25:30.335840 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ns4hc/must-gather-7h8wv"] Mar 07 09:25:31 crc kubenswrapper[4761]: I0307 09:25:31.110604 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/must-gather-7h8wv" event={"ID":"6e76b73c-a01e-4d4a-9574-8db8b23c3adb","Type":"ContainerStarted","Data":"ac8f516884616bab0cedf42dcfbf78c540ef5e7c907558b0c2b776eb258a3ed6"} Mar 07 09:25:39 crc kubenswrapper[4761]: I0307 09:25:39.227293 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/must-gather-7h8wv" event={"ID":"6e76b73c-a01e-4d4a-9574-8db8b23c3adb","Type":"ContainerStarted","Data":"b4656b8ea524827ca8cf95b0f649a630118cb3e3a497912fed259248ebe052d6"} Mar 07 09:25:40 crc kubenswrapper[4761]: I0307 09:25:40.244788 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/must-gather-7h8wv" event={"ID":"6e76b73c-a01e-4d4a-9574-8db8b23c3adb","Type":"ContainerStarted","Data":"a0bee0769ff56fd8f09ce4d6d57f3b219d70c968a5870f25aa904b98bfb31fb0"} Mar 07 09:25:40 crc kubenswrapper[4761]: I0307 09:25:40.274501 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ns4hc/must-gather-7h8wv" podStartSLOduration=2.847597243 podStartE2EDuration="11.274467891s" podCreationTimestamp="2026-03-07 09:25:29 +0000 UTC" firstStartedPulling="2026-03-07 09:25:30.342999277 +0000 UTC m=+5787.252165762" lastFinishedPulling="2026-03-07 09:25:38.769869945 +0000 UTC m=+5795.679036410" observedRunningTime="2026-03-07 09:25:40.26270048 +0000 UTC m=+5797.171866995" watchObservedRunningTime="2026-03-07 09:25:40.274467891 +0000 UTC m=+5797.183634406" Mar 07 09:25:45 crc kubenswrapper[4761]: I0307 09:25:45.661016 4761 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-ns4hc/crc-debug-fldxc"] Mar 07 09:25:45 crc kubenswrapper[4761]: I0307 09:25:45.662910 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-fldxc" Mar 07 09:25:45 crc kubenswrapper[4761]: I0307 09:25:45.744736 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-host\") pod \"crc-debug-fldxc\" (UID: \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\") " pod="openshift-must-gather-ns4hc/crc-debug-fldxc" Mar 07 09:25:45 crc kubenswrapper[4761]: I0307 09:25:45.744791 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55mmk\" (UniqueName: \"kubernetes.io/projected/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-kube-api-access-55mmk\") pod \"crc-debug-fldxc\" (UID: \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\") " pod="openshift-must-gather-ns4hc/crc-debug-fldxc" Mar 07 09:25:45 crc kubenswrapper[4761]: I0307 09:25:45.846885 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-host\") pod \"crc-debug-fldxc\" (UID: \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\") " pod="openshift-must-gather-ns4hc/crc-debug-fldxc" Mar 07 09:25:45 crc kubenswrapper[4761]: I0307 09:25:45.846992 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55mmk\" (UniqueName: \"kubernetes.io/projected/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-kube-api-access-55mmk\") pod \"crc-debug-fldxc\" (UID: \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\") " pod="openshift-must-gather-ns4hc/crc-debug-fldxc" Mar 07 09:25:45 crc kubenswrapper[4761]: I0307 09:25:45.847810 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-host\") pod \"crc-debug-fldxc\" (UID: \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\") " pod="openshift-must-gather-ns4hc/crc-debug-fldxc" Mar 07 09:25:45 crc kubenswrapper[4761]: I0307 09:25:45.871606 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55mmk\" (UniqueName: \"kubernetes.io/projected/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-kube-api-access-55mmk\") pod \"crc-debug-fldxc\" (UID: \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\") " pod="openshift-must-gather-ns4hc/crc-debug-fldxc" Mar 07 09:25:45 crc kubenswrapper[4761]: I0307 09:25:45.997331 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-fldxc" Mar 07 09:25:47 crc kubenswrapper[4761]: I0307 09:25:47.329022 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/crc-debug-fldxc" event={"ID":"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb","Type":"ContainerStarted","Data":"15d12ecd3d6a423268124368f555e338ee274e58571843b143b9a0fce23998de"} Mar 07 09:25:59 crc kubenswrapper[4761]: I0307 09:25:59.458261 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/crc-debug-fldxc" event={"ID":"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb","Type":"ContainerStarted","Data":"068ebde8a61c2f74f529aeef190e5f95bd0742a64866071d76a4d30cec4aa5c1"} Mar 07 09:25:59 crc kubenswrapper[4761]: I0307 09:25:59.479686 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ns4hc/crc-debug-fldxc" podStartSLOduration=2.835782564 podStartE2EDuration="14.479667427s" podCreationTimestamp="2026-03-07 09:25:45 +0000 UTC" firstStartedPulling="2026-03-07 09:25:46.842549259 +0000 UTC m=+5803.751715774" lastFinishedPulling="2026-03-07 09:25:58.486434142 +0000 UTC m=+5815.395600637" observedRunningTime="2026-03-07 09:25:59.474265263 +0000 UTC m=+5816.383431738" watchObservedRunningTime="2026-03-07 
09:25:59.479667427 +0000 UTC m=+5816.388833902" Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.160414 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547926-mpcnk"] Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.162692 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547926-mpcnk" Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.165676 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.166660 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.166892 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.173524 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547926-mpcnk"] Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.246858 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hkqk\" (UniqueName: \"kubernetes.io/projected/1f1ce531-a112-4c72-8d81-051bccb5e911-kube-api-access-6hkqk\") pod \"auto-csr-approver-29547926-mpcnk\" (UID: \"1f1ce531-a112-4c72-8d81-051bccb5e911\") " pod="openshift-infra/auto-csr-approver-29547926-mpcnk" Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.348877 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hkqk\" (UniqueName: \"kubernetes.io/projected/1f1ce531-a112-4c72-8d81-051bccb5e911-kube-api-access-6hkqk\") pod \"auto-csr-approver-29547926-mpcnk\" (UID: \"1f1ce531-a112-4c72-8d81-051bccb5e911\") " pod="openshift-infra/auto-csr-approver-29547926-mpcnk" Mar 07 09:26:00 crc 
kubenswrapper[4761]: I0307 09:26:00.373002 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hkqk\" (UniqueName: \"kubernetes.io/projected/1f1ce531-a112-4c72-8d81-051bccb5e911-kube-api-access-6hkqk\") pod \"auto-csr-approver-29547926-mpcnk\" (UID: \"1f1ce531-a112-4c72-8d81-051bccb5e911\") " pod="openshift-infra/auto-csr-approver-29547926-mpcnk" Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.480532 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547926-mpcnk" Mar 07 09:26:01 crc kubenswrapper[4761]: W0307 09:26:01.321026 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f1ce531_a112_4c72_8d81_051bccb5e911.slice/crio-05695d527af3cf8e2f126aa92469000266b129294baa00b8aa6300146d13ac1e WatchSource:0}: Error finding container 05695d527af3cf8e2f126aa92469000266b129294baa00b8aa6300146d13ac1e: Status 404 returned error can't find the container with id 05695d527af3cf8e2f126aa92469000266b129294baa00b8aa6300146d13ac1e Mar 07 09:26:01 crc kubenswrapper[4761]: I0307 09:26:01.321457 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547926-mpcnk"] Mar 07 09:26:01 crc kubenswrapper[4761]: I0307 09:26:01.480333 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547926-mpcnk" event={"ID":"1f1ce531-a112-4c72-8d81-051bccb5e911","Type":"ContainerStarted","Data":"05695d527af3cf8e2f126aa92469000266b129294baa00b8aa6300146d13ac1e"} Mar 07 09:26:03 crc kubenswrapper[4761]: I0307 09:26:03.508417 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547926-mpcnk" event={"ID":"1f1ce531-a112-4c72-8d81-051bccb5e911","Type":"ContainerStarted","Data":"c3859f1ed361967d75da9f67dc1dc6e93509a205363c3c250c7054b10952f11a"} Mar 07 09:26:03 crc kubenswrapper[4761]: I0307 
09:26:03.527343 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547926-mpcnk" podStartSLOduration=2.7378170600000002 podStartE2EDuration="3.527326714s" podCreationTimestamp="2026-03-07 09:26:00 +0000 UTC" firstStartedPulling="2026-03-07 09:26:01.32361778 +0000 UTC m=+5818.232784255" lastFinishedPulling="2026-03-07 09:26:02.113127434 +0000 UTC m=+5819.022293909" observedRunningTime="2026-03-07 09:26:03.524363821 +0000 UTC m=+5820.433530316" watchObservedRunningTime="2026-03-07 09:26:03.527326714 +0000 UTC m=+5820.436493189" Mar 07 09:26:05 crc kubenswrapper[4761]: I0307 09:26:05.538067 4761 generic.go:334] "Generic (PLEG): container finished" podID="1f1ce531-a112-4c72-8d81-051bccb5e911" containerID="c3859f1ed361967d75da9f67dc1dc6e93509a205363c3c250c7054b10952f11a" exitCode=0 Mar 07 09:26:05 crc kubenswrapper[4761]: I0307 09:26:05.538146 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547926-mpcnk" event={"ID":"1f1ce531-a112-4c72-8d81-051bccb5e911","Type":"ContainerDied","Data":"c3859f1ed361967d75da9f67dc1dc6e93509a205363c3c250c7054b10952f11a"} Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.240580 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547926-mpcnk" Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.337946 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hkqk\" (UniqueName: \"kubernetes.io/projected/1f1ce531-a112-4c72-8d81-051bccb5e911-kube-api-access-6hkqk\") pod \"1f1ce531-a112-4c72-8d81-051bccb5e911\" (UID: \"1f1ce531-a112-4c72-8d81-051bccb5e911\") " Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.352370 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f1ce531-a112-4c72-8d81-051bccb5e911-kube-api-access-6hkqk" (OuterVolumeSpecName: "kube-api-access-6hkqk") pod "1f1ce531-a112-4c72-8d81-051bccb5e911" (UID: "1f1ce531-a112-4c72-8d81-051bccb5e911"). InnerVolumeSpecName "kube-api-access-6hkqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.449854 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hkqk\" (UniqueName: \"kubernetes.io/projected/1f1ce531-a112-4c72-8d81-051bccb5e911-kube-api-access-6hkqk\") on node \"crc\" DevicePath \"\"" Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.565325 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547926-mpcnk" event={"ID":"1f1ce531-a112-4c72-8d81-051bccb5e911","Type":"ContainerDied","Data":"05695d527af3cf8e2f126aa92469000266b129294baa00b8aa6300146d13ac1e"} Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.565373 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05695d527af3cf8e2f126aa92469000266b129294baa00b8aa6300146d13ac1e" Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.565413 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547926-mpcnk"
Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.619316 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547920-sj747"]
Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.632228 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547920-sj747"]
Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.720600 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582dd6f3-adc8-4933-b406-bd096570fbbf" path="/var/lib/kubelet/pods/582dd6f3-adc8-4933-b406-bd096570fbbf/volumes"
Mar 07 09:26:13 crc kubenswrapper[4761]: I0307 09:26:13.769462 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 09:26:13 crc kubenswrapper[4761]: I0307 09:26:13.769980 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 09:26:25 crc kubenswrapper[4761]: I0307 09:26:25.878296 4761 scope.go:117] "RemoveContainer" containerID="861955b25ecb851fdf0a059445979f1c93ff558bd5e94bf49b3f0234932445b2"
Mar 07 09:26:43 crc kubenswrapper[4761]: I0307 09:26:43.768808 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 09:26:43 crc kubenswrapper[4761]: I0307 09:26:43.769564 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 09:26:45 crc kubenswrapper[4761]: I0307 09:26:45.055479 4761 generic.go:334] "Generic (PLEG): container finished" podID="7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb" containerID="068ebde8a61c2f74f529aeef190e5f95bd0742a64866071d76a4d30cec4aa5c1" exitCode=0
Mar 07 09:26:45 crc kubenswrapper[4761]: I0307 09:26:45.055567 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/crc-debug-fldxc" event={"ID":"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb","Type":"ContainerDied","Data":"068ebde8a61c2f74f529aeef190e5f95bd0742a64866071d76a4d30cec4aa5c1"}
Mar 07 09:26:46 crc kubenswrapper[4761]: I0307 09:26:46.221629 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-fldxc"
Mar 07 09:26:46 crc kubenswrapper[4761]: I0307 09:26:46.265884 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ns4hc/crc-debug-fldxc"]
Mar 07 09:26:46 crc kubenswrapper[4761]: I0307 09:26:46.280565 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ns4hc/crc-debug-fldxc"]
Mar 07 09:26:46 crc kubenswrapper[4761]: I0307 09:26:46.370170 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55mmk\" (UniqueName: \"kubernetes.io/projected/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-kube-api-access-55mmk\") pod \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\" (UID: \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\") "
Mar 07 09:26:46 crc kubenswrapper[4761]: I0307 09:26:46.370339 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-host\") pod \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\" (UID: \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\") "
Mar 07 09:26:46 crc kubenswrapper[4761]: I0307 09:26:46.370474 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-host" (OuterVolumeSpecName: "host") pod "7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb" (UID: "7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 09:26:46 crc kubenswrapper[4761]: I0307 09:26:46.370926 4761 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-host\") on node \"crc\" DevicePath \"\""
Mar 07 09:26:46 crc kubenswrapper[4761]: I0307 09:26:46.378939 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-kube-api-access-55mmk" (OuterVolumeSpecName: "kube-api-access-55mmk") pod "7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb" (UID: "7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb"). InnerVolumeSpecName "kube-api-access-55mmk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 09:26:46 crc kubenswrapper[4761]: I0307 09:26:46.473094 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55mmk\" (UniqueName: \"kubernetes.io/projected/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-kube-api-access-55mmk\") on node \"crc\" DevicePath \"\""
Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.082677 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15d12ecd3d6a423268124368f555e338ee274e58571843b143b9a0fce23998de"
Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.083032 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-fldxc"
Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.558646 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ns4hc/crc-debug-n6kc2"]
Mar 07 09:26:47 crc kubenswrapper[4761]: E0307 09:26:47.559337 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1ce531-a112-4c72-8d81-051bccb5e911" containerName="oc"
Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.559350 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1ce531-a112-4c72-8d81-051bccb5e911" containerName="oc"
Mar 07 09:26:47 crc kubenswrapper[4761]: E0307 09:26:47.559373 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb" containerName="container-00"
Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.559379 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb" containerName="container-00"
Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.559615 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb" containerName="container-00"
Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.559630 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f1ce531-a112-4c72-8d81-051bccb5e911" containerName="oc"
Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.561344 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-n6kc2"
Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.699289 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-host\") pod \"crc-debug-n6kc2\" (UID: \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\") " pod="openshift-must-gather-ns4hc/crc-debug-n6kc2"
Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.699646 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp6nh\" (UniqueName: \"kubernetes.io/projected/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-kube-api-access-sp6nh\") pod \"crc-debug-n6kc2\" (UID: \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\") " pod="openshift-must-gather-ns4hc/crc-debug-n6kc2"
Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.716352 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb" path="/var/lib/kubelet/pods/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb/volumes"
Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.802219 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-host\") pod \"crc-debug-n6kc2\" (UID: \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\") " pod="openshift-must-gather-ns4hc/crc-debug-n6kc2"
Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.802392 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp6nh\" (UniqueName: \"kubernetes.io/projected/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-kube-api-access-sp6nh\") pod \"crc-debug-n6kc2\" (UID: \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\") " pod="openshift-must-gather-ns4hc/crc-debug-n6kc2"
Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.802600 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-host\") pod \"crc-debug-n6kc2\" (UID: \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\") " pod="openshift-must-gather-ns4hc/crc-debug-n6kc2"
Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.834739 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp6nh\" (UniqueName: \"kubernetes.io/projected/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-kube-api-access-sp6nh\") pod \"crc-debug-n6kc2\" (UID: \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\") " pod="openshift-must-gather-ns4hc/crc-debug-n6kc2"
Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.877476 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-n6kc2"
Mar 07 09:26:48 crc kubenswrapper[4761]: I0307 09:26:48.120663 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/crc-debug-n6kc2" event={"ID":"f56a42f4-7bbd-492e-b92d-8ea3c127b37e","Type":"ContainerStarted","Data":"d13aeb44313443ac4241765cefa7b68cd540e930be3a2d2051a038f54f59100d"}
Mar 07 09:26:49 crc kubenswrapper[4761]: I0307 09:26:49.131371 4761 generic.go:334] "Generic (PLEG): container finished" podID="f56a42f4-7bbd-492e-b92d-8ea3c127b37e" containerID="2df4b03d893e63c7b20aa7be594280b87a5b14b26b7ff213014869b7fe4ee9d7" exitCode=0
Mar 07 09:26:49 crc kubenswrapper[4761]: I0307 09:26:49.131451 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/crc-debug-n6kc2" event={"ID":"f56a42f4-7bbd-492e-b92d-8ea3c127b37e","Type":"ContainerDied","Data":"2df4b03d893e63c7b20aa7be594280b87a5b14b26b7ff213014869b7fe4ee9d7"}
Mar 07 09:26:50 crc kubenswrapper[4761]: I0307 09:26:50.263763 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-n6kc2"
Mar 07 09:26:50 crc kubenswrapper[4761]: I0307 09:26:50.366563 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp6nh\" (UniqueName: \"kubernetes.io/projected/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-kube-api-access-sp6nh\") pod \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\" (UID: \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\") "
Mar 07 09:26:50 crc kubenswrapper[4761]: I0307 09:26:50.366885 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-host\") pod \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\" (UID: \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\") "
Mar 07 09:26:50 crc kubenswrapper[4761]: I0307 09:26:50.367040 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-host" (OuterVolumeSpecName: "host") pod "f56a42f4-7bbd-492e-b92d-8ea3c127b37e" (UID: "f56a42f4-7bbd-492e-b92d-8ea3c127b37e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 09:26:50 crc kubenswrapper[4761]: I0307 09:26:50.367471 4761 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-host\") on node \"crc\" DevicePath \"\""
Mar 07 09:26:50 crc kubenswrapper[4761]: I0307 09:26:50.376999 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-kube-api-access-sp6nh" (OuterVolumeSpecName: "kube-api-access-sp6nh") pod "f56a42f4-7bbd-492e-b92d-8ea3c127b37e" (UID: "f56a42f4-7bbd-492e-b92d-8ea3c127b37e"). InnerVolumeSpecName "kube-api-access-sp6nh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 09:26:50 crc kubenswrapper[4761]: I0307 09:26:50.470021 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp6nh\" (UniqueName: \"kubernetes.io/projected/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-kube-api-access-sp6nh\") on node \"crc\" DevicePath \"\""
Mar 07 09:26:50 crc kubenswrapper[4761]: I0307 09:26:50.477480 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ns4hc/crc-debug-n6kc2"]
Mar 07 09:26:50 crc kubenswrapper[4761]: I0307 09:26:50.489747 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ns4hc/crc-debug-n6kc2"]
Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.160683 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d13aeb44313443ac4241765cefa7b68cd540e930be3a2d2051a038f54f59100d"
Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.160763 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-n6kc2"
Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.720109 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f56a42f4-7bbd-492e-b92d-8ea3c127b37e" path="/var/lib/kubelet/pods/f56a42f4-7bbd-492e-b92d-8ea3c127b37e/volumes"
Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.763583 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ns4hc/crc-debug-2zbln"]
Mar 07 09:26:51 crc kubenswrapper[4761]: E0307 09:26:51.764094 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56a42f4-7bbd-492e-b92d-8ea3c127b37e" containerName="container-00"
Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.764111 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56a42f4-7bbd-492e-b92d-8ea3c127b37e" containerName="container-00"
Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.764748 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f56a42f4-7bbd-492e-b92d-8ea3c127b37e" containerName="container-00"
Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.765617 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-2zbln"
Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.797546 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9d3132db-b2e3-481a-8024-1dc814064f93-host\") pod \"crc-debug-2zbln\" (UID: \"9d3132db-b2e3-481a-8024-1dc814064f93\") " pod="openshift-must-gather-ns4hc/crc-debug-2zbln"
Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.797733 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvmfs\" (UniqueName: \"kubernetes.io/projected/9d3132db-b2e3-481a-8024-1dc814064f93-kube-api-access-tvmfs\") pod \"crc-debug-2zbln\" (UID: \"9d3132db-b2e3-481a-8024-1dc814064f93\") " pod="openshift-must-gather-ns4hc/crc-debug-2zbln"
Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.899990 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9d3132db-b2e3-481a-8024-1dc814064f93-host\") pod \"crc-debug-2zbln\" (UID: \"9d3132db-b2e3-481a-8024-1dc814064f93\") " pod="openshift-must-gather-ns4hc/crc-debug-2zbln"
Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.900062 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvmfs\" (UniqueName: \"kubernetes.io/projected/9d3132db-b2e3-481a-8024-1dc814064f93-kube-api-access-tvmfs\") pod \"crc-debug-2zbln\" (UID: \"9d3132db-b2e3-481a-8024-1dc814064f93\") " pod="openshift-must-gather-ns4hc/crc-debug-2zbln"
Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.900179 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9d3132db-b2e3-481a-8024-1dc814064f93-host\") pod \"crc-debug-2zbln\" (UID: \"9d3132db-b2e3-481a-8024-1dc814064f93\") " pod="openshift-must-gather-ns4hc/crc-debug-2zbln"
Mar 07 09:26:52 crc kubenswrapper[4761]: I0307 09:26:52.302376 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvmfs\" (UniqueName: \"kubernetes.io/projected/9d3132db-b2e3-481a-8024-1dc814064f93-kube-api-access-tvmfs\") pod \"crc-debug-2zbln\" (UID: \"9d3132db-b2e3-481a-8024-1dc814064f93\") " pod="openshift-must-gather-ns4hc/crc-debug-2zbln"
Mar 07 09:26:52 crc kubenswrapper[4761]: I0307 09:26:52.380780 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-2zbln"
Mar 07 09:26:53 crc kubenswrapper[4761]: I0307 09:26:53.182865 4761 generic.go:334] "Generic (PLEG): container finished" podID="9d3132db-b2e3-481a-8024-1dc814064f93" containerID="0bb8fa2218c85726a6fd6eac2ae34706b3de602a070608dc110b362f1de67d81" exitCode=0
Mar 07 09:26:53 crc kubenswrapper[4761]: I0307 09:26:53.182959 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/crc-debug-2zbln" event={"ID":"9d3132db-b2e3-481a-8024-1dc814064f93","Type":"ContainerDied","Data":"0bb8fa2218c85726a6fd6eac2ae34706b3de602a070608dc110b362f1de67d81"}
Mar 07 09:26:53 crc kubenswrapper[4761]: I0307 09:26:53.183217 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/crc-debug-2zbln" event={"ID":"9d3132db-b2e3-481a-8024-1dc814064f93","Type":"ContainerStarted","Data":"5b9e5ece56e907442532fd22b2ae57ad7423272f00ba0d2feadf6d6104dc69b9"}
Mar 07 09:26:53 crc kubenswrapper[4761]: I0307 09:26:53.225238 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ns4hc/crc-debug-2zbln"]
Mar 07 09:26:53 crc kubenswrapper[4761]: I0307 09:26:53.235663 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ns4hc/crc-debug-2zbln"]
Mar 07 09:26:54 crc kubenswrapper[4761]: I0307 09:26:54.831485 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-2zbln"
Mar 07 09:26:54 crc kubenswrapper[4761]: I0307 09:26:54.990252 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvmfs\" (UniqueName: \"kubernetes.io/projected/9d3132db-b2e3-481a-8024-1dc814064f93-kube-api-access-tvmfs\") pod \"9d3132db-b2e3-481a-8024-1dc814064f93\" (UID: \"9d3132db-b2e3-481a-8024-1dc814064f93\") "
Mar 07 09:26:54 crc kubenswrapper[4761]: I0307 09:26:54.990367 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9d3132db-b2e3-481a-8024-1dc814064f93-host\") pod \"9d3132db-b2e3-481a-8024-1dc814064f93\" (UID: \"9d3132db-b2e3-481a-8024-1dc814064f93\") "
Mar 07 09:26:54 crc kubenswrapper[4761]: I0307 09:26:54.990537 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3132db-b2e3-481a-8024-1dc814064f93-host" (OuterVolumeSpecName: "host") pod "9d3132db-b2e3-481a-8024-1dc814064f93" (UID: "9d3132db-b2e3-481a-8024-1dc814064f93"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 09:26:54 crc kubenswrapper[4761]: I0307 09:26:54.991280 4761 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9d3132db-b2e3-481a-8024-1dc814064f93-host\") on node \"crc\" DevicePath \"\""
Mar 07 09:26:55 crc kubenswrapper[4761]: I0307 09:26:55.000067 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d3132db-b2e3-481a-8024-1dc814064f93-kube-api-access-tvmfs" (OuterVolumeSpecName: "kube-api-access-tvmfs") pod "9d3132db-b2e3-481a-8024-1dc814064f93" (UID: "9d3132db-b2e3-481a-8024-1dc814064f93"). InnerVolumeSpecName "kube-api-access-tvmfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 09:26:55 crc kubenswrapper[4761]: I0307 09:26:55.094026 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvmfs\" (UniqueName: \"kubernetes.io/projected/9d3132db-b2e3-481a-8024-1dc814064f93-kube-api-access-tvmfs\") on node \"crc\" DevicePath \"\""
Mar 07 09:26:55 crc kubenswrapper[4761]: I0307 09:26:55.209904 4761 scope.go:117] "RemoveContainer" containerID="0bb8fa2218c85726a6fd6eac2ae34706b3de602a070608dc110b362f1de67d81"
Mar 07 09:26:55 crc kubenswrapper[4761]: I0307 09:26:55.209952 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-2zbln"
Mar 07 09:26:55 crc kubenswrapper[4761]: I0307 09:26:55.721612 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d3132db-b2e3-481a-8024-1dc814064f93" path="/var/lib/kubelet/pods/9d3132db-b2e3-481a-8024-1dc814064f93/volumes"
Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.189890 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-szvxt"]
Mar 07 09:27:02 crc kubenswrapper[4761]: E0307 09:27:02.233440 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d3132db-b2e3-481a-8024-1dc814064f93" containerName="container-00"
Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.233492 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d3132db-b2e3-481a-8024-1dc814064f93" containerName="container-00"
Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.234312 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d3132db-b2e3-481a-8024-1dc814064f93" containerName="container-00"
Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.246002 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szvxt"
Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.254370 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-szvxt"]
Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.382214 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-utilities\") pod \"community-operators-szvxt\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " pod="openshift-marketplace/community-operators-szvxt"
Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.382319 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-catalog-content\") pod \"community-operators-szvxt\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " pod="openshift-marketplace/community-operators-szvxt"
Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.382802 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf8hq\" (UniqueName: \"kubernetes.io/projected/7fd9bad1-b55f-4359-b071-cac65fc84a66-kube-api-access-gf8hq\") pod \"community-operators-szvxt\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " pod="openshift-marketplace/community-operators-szvxt"
Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.484428 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf8hq\" (UniqueName: \"kubernetes.io/projected/7fd9bad1-b55f-4359-b071-cac65fc84a66-kube-api-access-gf8hq\") pod \"community-operators-szvxt\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " pod="openshift-marketplace/community-operators-szvxt"
Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.484534 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-utilities\") pod \"community-operators-szvxt\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " pod="openshift-marketplace/community-operators-szvxt"
Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.484589 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-catalog-content\") pod \"community-operators-szvxt\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " pod="openshift-marketplace/community-operators-szvxt"
Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.485094 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-utilities\") pod \"community-operators-szvxt\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " pod="openshift-marketplace/community-operators-szvxt"
Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.485155 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-catalog-content\") pod \"community-operators-szvxt\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " pod="openshift-marketplace/community-operators-szvxt"
Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.517928 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf8hq\" (UniqueName: \"kubernetes.io/projected/7fd9bad1-b55f-4359-b071-cac65fc84a66-kube-api-access-gf8hq\") pod \"community-operators-szvxt\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " pod="openshift-marketplace/community-operators-szvxt"
Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.574801 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szvxt"
Mar 07 09:27:03 crc kubenswrapper[4761]: I0307 09:27:03.219317 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-szvxt"]
Mar 07 09:27:03 crc kubenswrapper[4761]: I0307 09:27:03.364590 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szvxt" event={"ID":"7fd9bad1-b55f-4359-b071-cac65fc84a66","Type":"ContainerStarted","Data":"91d991ededd51d4566930bccf440fd4fa4edef095a5b32961c15a7bbca0027d0"}
Mar 07 09:27:04 crc kubenswrapper[4761]: I0307 09:27:04.376576 4761 generic.go:334] "Generic (PLEG): container finished" podID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerID="f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4" exitCode=0
Mar 07 09:27:04 crc kubenswrapper[4761]: I0307 09:27:04.376899 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szvxt" event={"ID":"7fd9bad1-b55f-4359-b071-cac65fc84a66","Type":"ContainerDied","Data":"f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4"}
Mar 07 09:27:04 crc kubenswrapper[4761]: I0307 09:27:04.379451 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 07 09:27:05 crc kubenswrapper[4761]: I0307 09:27:05.390188 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szvxt" event={"ID":"7fd9bad1-b55f-4359-b071-cac65fc84a66","Type":"ContainerStarted","Data":"aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a"}
Mar 07 09:27:07 crc kubenswrapper[4761]: I0307 09:27:07.416053 4761 generic.go:334] "Generic (PLEG): container finished" podID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerID="aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a" exitCode=0
Mar 07 09:27:07 crc kubenswrapper[4761]: I0307 09:27:07.416160 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szvxt" event={"ID":"7fd9bad1-b55f-4359-b071-cac65fc84a66","Type":"ContainerDied","Data":"aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a"}
Mar 07 09:27:08 crc kubenswrapper[4761]: I0307 09:27:08.428451 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szvxt" event={"ID":"7fd9bad1-b55f-4359-b071-cac65fc84a66","Type":"ContainerStarted","Data":"449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76"}
Mar 07 09:27:08 crc kubenswrapper[4761]: I0307 09:27:08.451980 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-szvxt" podStartSLOduration=3.033024299 podStartE2EDuration="6.4519607s" podCreationTimestamp="2026-03-07 09:27:02 +0000 UTC" firstStartedPulling="2026-03-07 09:27:04.37913787 +0000 UTC m=+5881.288304365" lastFinishedPulling="2026-03-07 09:27:07.798074281 +0000 UTC m=+5884.707240766" observedRunningTime="2026-03-07 09:27:08.443010499 +0000 UTC m=+5885.352176974" watchObservedRunningTime="2026-03-07 09:27:08.4519607 +0000 UTC m=+5885.361127175"
Mar 07 09:27:12 crc kubenswrapper[4761]: I0307 09:27:12.575737 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-szvxt"
Mar 07 09:27:12 crc kubenswrapper[4761]: I0307 09:27:12.576172 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-szvxt"
Mar 07 09:27:13 crc kubenswrapper[4761]: I0307 09:27:13.635515 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-szvxt" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerName="registry-server" probeResult="failure" output=<
Mar 07 09:27:13 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 09:27:13 crc kubenswrapper[4761]: >
Mar 07 09:27:13 crc kubenswrapper[4761]: I0307 09:27:13.770052 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 09:27:13 crc kubenswrapper[4761]: I0307 09:27:13.770122 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 09:27:13 crc kubenswrapper[4761]: I0307 09:27:13.770177 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9"
Mar 07 09:27:13 crc kubenswrapper[4761]: I0307 09:27:13.771194 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a43ace93383b743eb2d6cd7f20bb40b06f6d768f904a91bafc3da780f93481ce"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 07 09:27:13 crc kubenswrapper[4761]: I0307 09:27:13.771251 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://a43ace93383b743eb2d6cd7f20bb40b06f6d768f904a91bafc3da780f93481ce" gracePeriod=600
Mar 07 09:27:14 crc kubenswrapper[4761]: I0307 09:27:14.496287 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="a43ace93383b743eb2d6cd7f20bb40b06f6d768f904a91bafc3da780f93481ce" exitCode=0
Mar 07 09:27:14 crc kubenswrapper[4761]: I0307 09:27:14.496339 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"a43ace93383b743eb2d6cd7f20bb40b06f6d768f904a91bafc3da780f93481ce"}
Mar 07 09:27:14 crc kubenswrapper[4761]: I0307 09:27:14.496650 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"}
Mar 07 09:27:14 crc kubenswrapper[4761]: I0307 09:27:14.496672 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"
Mar 07 09:27:22 crc kubenswrapper[4761]: I0307 09:27:22.651157 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-szvxt"
Mar 07 09:27:22 crc kubenswrapper[4761]: I0307 09:27:22.727842 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-szvxt"
Mar 07 09:27:22 crc kubenswrapper[4761]: I0307 09:27:22.913192 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-szvxt"]
Mar 07 09:27:24 crc kubenswrapper[4761]: I0307 09:27:24.630181 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-szvxt" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerName="registry-server" containerID="cri-o://449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76" gracePeriod=2
Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.369901 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szvxt"
Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.514066 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-catalog-content\") pod \"7fd9bad1-b55f-4359-b071-cac65fc84a66\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") "
Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.514239 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-utilities\") pod \"7fd9bad1-b55f-4359-b071-cac65fc84a66\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") "
Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.514429 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf8hq\" (UniqueName: \"kubernetes.io/projected/7fd9bad1-b55f-4359-b071-cac65fc84a66-kube-api-access-gf8hq\") pod \"7fd9bad1-b55f-4359-b071-cac65fc84a66\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") "
Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.515095 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-utilities" (OuterVolumeSpecName: "utilities") pod "7fd9bad1-b55f-4359-b071-cac65fc84a66" (UID: "7fd9bad1-b55f-4359-b071-cac65fc84a66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.524667 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd9bad1-b55f-4359-b071-cac65fc84a66-kube-api-access-gf8hq" (OuterVolumeSpecName: "kube-api-access-gf8hq") pod "7fd9bad1-b55f-4359-b071-cac65fc84a66" (UID: "7fd9bad1-b55f-4359-b071-cac65fc84a66"). InnerVolumeSpecName "kube-api-access-gf8hq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.618501 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.618553 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf8hq\" (UniqueName: \"kubernetes.io/projected/7fd9bad1-b55f-4359-b071-cac65fc84a66-kube-api-access-gf8hq\") on node \"crc\" DevicePath \"\""
Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.619397 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fd9bad1-b55f-4359-b071-cac65fc84a66" (UID: "7fd9bad1-b55f-4359-b071-cac65fc84a66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.641694 4761 generic.go:334] "Generic (PLEG): container finished" podID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerID="449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76" exitCode=0
Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.641749 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szvxt" event={"ID":"7fd9bad1-b55f-4359-b071-cac65fc84a66","Type":"ContainerDied","Data":"449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76"}
Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.641791 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szvxt" event={"ID":"7fd9bad1-b55f-4359-b071-cac65fc84a66","Type":"ContainerDied","Data":"91d991ededd51d4566930bccf440fd4fa4edef095a5b32961c15a7bbca0027d0"}
Mar 07 09:27:25 crc kubenswrapper[4761]: I0307
09:27:25.641795 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szvxt" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.641816 4761 scope.go:117] "RemoveContainer" containerID="449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.684801 4761 scope.go:117] "RemoveContainer" containerID="aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.695587 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-szvxt"] Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.720396 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.724856 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-szvxt"] Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.726546 4761 scope.go:117] "RemoveContainer" containerID="f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.790444 4761 scope.go:117] "RemoveContainer" containerID="449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76" Mar 07 09:27:25 crc kubenswrapper[4761]: E0307 09:27:25.792485 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76\": container with ID starting with 449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76 not found: ID does not exist" containerID="449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 
09:27:25.792541 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76"} err="failed to get container status \"449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76\": rpc error: code = NotFound desc = could not find container \"449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76\": container with ID starting with 449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76 not found: ID does not exist" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.792623 4761 scope.go:117] "RemoveContainer" containerID="aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a" Mar 07 09:27:25 crc kubenswrapper[4761]: E0307 09:27:25.792993 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a\": container with ID starting with aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a not found: ID does not exist" containerID="aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.793029 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a"} err="failed to get container status \"aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a\": rpc error: code = NotFound desc = could not find container \"aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a\": container with ID starting with aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a not found: ID does not exist" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.793051 4761 scope.go:117] "RemoveContainer" containerID="f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4" Mar 07 09:27:25 crc 
kubenswrapper[4761]: E0307 09:27:25.793353 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4\": container with ID starting with f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4 not found: ID does not exist" containerID="f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.793440 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4"} err="failed to get container status \"f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4\": rpc error: code = NotFound desc = could not find container \"f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4\": container with ID starting with f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4 not found: ID does not exist" Mar 07 09:27:27 crc kubenswrapper[4761]: I0307 09:27:27.734366 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" path="/var/lib/kubelet/pods/7fd9bad1-b55f-4359-b071-cac65fc84a66/volumes" Mar 07 09:27:34 crc kubenswrapper[4761]: I0307 09:27:34.442157 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff/aodh-api/0.log" Mar 07 09:27:35 crc kubenswrapper[4761]: I0307 09:27:35.546768 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff/aodh-evaluator/0.log" Mar 07 09:27:35 crc kubenswrapper[4761]: I0307 09:27:35.560558 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff/aodh-listener/0.log" Mar 07 09:27:35 crc kubenswrapper[4761]: I0307 09:27:35.588363 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff/aodh-notifier/0.log" Mar 07 09:27:35 crc kubenswrapper[4761]: I0307 09:27:35.763922 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5ccfb69fc8-m454z_43376e1e-1806-4f20-a05f-fe74fee5d843/barbican-api/0.log" Mar 07 09:27:35 crc kubenswrapper[4761]: I0307 09:27:35.773829 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5ccfb69fc8-m454z_43376e1e-1806-4f20-a05f-fe74fee5d843/barbican-api-log/0.log" Mar 07 09:27:35 crc kubenswrapper[4761]: I0307 09:27:35.849502 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c8db699f6-9j9k4_04f251ce-e592-4a42-a918-314ea2722d03/barbican-keystone-listener/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.080908 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c8db699f6-9j9k4_04f251ce-e592-4a42-a918-314ea2722d03/barbican-keystone-listener-log/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.100458 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-59f545954f-l958x_7d4575c8-a02a-4eb3-9a4c-be82914374f7/barbican-worker-log/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.115798 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-59f545954f-l958x_7d4575c8-a02a-4eb3-9a4c-be82914374f7/barbican-worker/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.327974 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m_27f66d5b-c359-480d-9bb8-02447507d3ca/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.331796 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_2bdde810-6429-4553-a9bb-1ccef1f89e2d/ceilometer-central-agent/1.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.535058 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2bdde810-6429-4553-a9bb-1ccef1f89e2d/sg-core/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.542693 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2bdde810-6429-4553-a9bb-1ccef1f89e2d/ceilometer-notification-agent/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.570610 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2bdde810-6429-4553-a9bb-1ccef1f89e2d/proxy-httpd/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.598193 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2bdde810-6429-4553-a9bb-1ccef1f89e2d/ceilometer-central-agent/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.802064 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_42f2382e-b335-47f4-8345-8544853fb91a/cinder-api/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.802227 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_42f2382e-b335-47f4-8345-8544853fb91a/cinder-api-log/0.log" Mar 07 09:27:37 crc kubenswrapper[4761]: I0307 09:27:37.770870 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_69ab7bc1-753e-437c-bd70-130581863fde/cinder-scheduler/0.log" Mar 07 09:27:37 crc kubenswrapper[4761]: I0307 09:27:37.825074 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_69ab7bc1-753e-437c-bd70-130581863fde/cinder-scheduler/1.log" Mar 07 09:27:37 crc kubenswrapper[4761]: I0307 09:27:37.858424 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_69ab7bc1-753e-437c-bd70-130581863fde/probe/0.log" Mar 07 09:27:38 crc kubenswrapper[4761]: I0307 09:27:38.041468 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h_c36e1db2-a57f-46b3-9271-7ba8586fc8b2/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:38 crc kubenswrapper[4761]: I0307 09:27:38.095479 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh_0e72d6d8-c8fb-4093-9395-c3de682b7aa9/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:38 crc kubenswrapper[4761]: I0307 09:27:38.257435 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-rjbxk_3322ce20-e09c-4b31-add3-d54b0a38fbae/init/0.log" Mar 07 09:27:38 crc kubenswrapper[4761]: I0307 09:27:38.441707 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-rjbxk_3322ce20-e09c-4b31-add3-d54b0a38fbae/init/0.log" Mar 07 09:27:38 crc kubenswrapper[4761]: I0307 09:27:38.522108 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-rjbxk_3322ce20-e09c-4b31-add3-d54b0a38fbae/dnsmasq-dns/0.log" Mar 07 09:27:38 crc kubenswrapper[4761]: I0307 09:27:38.537693 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5_1ee12ec5-76cf-4824-9882-d55c16a3c08e/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:38 crc kubenswrapper[4761]: I0307 09:27:38.778407 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f78969ff-e84a-4fed-8d3d-21688ae544c7/glance-log/0.log" Mar 07 09:27:38 crc kubenswrapper[4761]: I0307 09:27:38.844283 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_f78969ff-e84a-4fed-8d3d-21688ae544c7/glance-httpd/0.log" Mar 07 09:27:38 crc kubenswrapper[4761]: I0307 09:27:38.911990 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7dfba149-bd76-4537-a488-ef2606ba2d9b/glance-httpd/0.log" Mar 07 09:27:39 crc kubenswrapper[4761]: I0307 09:27:39.001951 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7dfba149-bd76-4537-a488-ef2606ba2d9b/glance-log/0.log" Mar 07 09:27:39 crc kubenswrapper[4761]: I0307 09:27:39.551587 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5d698bbbb-b4tpc_c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4/heat-api/0.log" Mar 07 09:27:39 crc kubenswrapper[4761]: I0307 09:27:39.738474 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7764c87546-svl8g_2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12/heat-engine/0.log" Mar 07 09:27:39 crc kubenswrapper[4761]: I0307 09:27:39.757448 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg_927c98b8-4e9f-41dc-9faa-fef8e98a71d2/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:39 crc kubenswrapper[4761]: I0307 09:27:39.814268 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7d497d755f-jwccr_3336529a-b93c-46c9-844b-337e4ef49f98/heat-cfnapi/0.log" Mar 07 09:27:39 crc kubenswrapper[4761]: I0307 09:27:39.994489 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-t7m5g_0e1e8856-bbd9-4931-af28-f508ce15b034/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:40 crc kubenswrapper[4761]: I0307 09:27:40.075920 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29547901-b7kzn_b0d8c848-14d6-46c1-a912-87673a3d974a/keystone-cron/0.log" Mar 07 09:27:40 crc kubenswrapper[4761]: I0307 09:27:40.270807 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ed86dd3e-17e0-467b-8243-8209a04dcbe1/kube-state-metrics/0.log" Mar 07 09:27:40 crc kubenswrapper[4761]: I0307 09:27:40.356945 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-8t687_becfd5e1-5c42-4a2c-83ca-bd7f02855288/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:40 crc kubenswrapper[4761]: I0307 09:27:40.579180 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-28hkt_92c65649-010f-4704-8069-ee58f1d7d383/logging-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:40 crc kubenswrapper[4761]: I0307 09:27:40.786353 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_6feb98fd-961e-4495-9ff4-8bafdd080e31/mysqld-exporter/0.log" Mar 07 09:27:41 crc kubenswrapper[4761]: I0307 09:27:41.197512 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-69d7d999d5-z6jzw_ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d/neutron-httpd/0.log" Mar 07 09:27:41 crc kubenswrapper[4761]: I0307 09:27:41.252916 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-668988d5d5-hwhxv_e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6/keystone-api/0.log" Mar 07 09:27:41 crc kubenswrapper[4761]: I0307 09:27:41.304440 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-69d7d999d5-z6jzw_ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d/neutron-api/0.log" Mar 07 09:27:41 crc kubenswrapper[4761]: I0307 09:27:41.382515 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6_27ac2fbd-f084-4103-97aa-45c01a3aea2a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:42 crc kubenswrapper[4761]: I0307 09:27:42.096877 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_af14fdad-b14e-465d-bd67-6f5f89f87d45/nova-cell0-conductor-conductor/0.log" Mar 07 09:27:42 crc kubenswrapper[4761]: I0307 09:27:42.387199 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c12aff9a-a09d-4da9-8a3d-d59591060f22/nova-api-log/0.log" Mar 07 09:27:42 crc kubenswrapper[4761]: I0307 09:27:42.484674 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_993e0457-91eb-4234-ad39-0855846b8d31/nova-cell1-conductor-conductor/0.log" Mar 07 09:27:42 crc kubenswrapper[4761]: I0307 09:27:42.698552 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ff986583-4706-47fa-9fec-eb503de7cac1/nova-cell1-novncproxy-novncproxy/0.log" Mar 07 09:27:42 crc kubenswrapper[4761]: I0307 09:27:42.775728 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-p44m9_46b536e5-c591-42d8-8903-51e4078bfa09/nova-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:42 crc kubenswrapper[4761]: I0307 09:27:42.798197 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c12aff9a-a09d-4da9-8a3d-d59591060f22/nova-api-api/0.log" Mar 07 09:27:42 crc kubenswrapper[4761]: I0307 09:27:42.979753 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_34c23fbf-c0a4-4b0e-bc41-e23eab413801/nova-metadata-log/0.log" Mar 07 09:27:43 crc kubenswrapper[4761]: I0307 09:27:43.257441 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_9f0ccb6a-6367-409b-b996-4946fa2c8981/mysql-bootstrap/0.log" Mar 07 09:27:43 crc kubenswrapper[4761]: I0307 09:27:43.284557 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6517c184-4de2-40f1-a808-90030b11e0a9/nova-scheduler-scheduler/0.log" Mar 07 09:27:43 crc kubenswrapper[4761]: I0307 09:27:43.453083 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f0ccb6a-6367-409b-b996-4946fa2c8981/mysql-bootstrap/0.log" Mar 07 09:27:43 crc kubenswrapper[4761]: I0307 09:27:43.537679 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f0ccb6a-6367-409b-b996-4946fa2c8981/galera/1.log" Mar 07 09:27:43 crc kubenswrapper[4761]: I0307 09:27:43.571745 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f0ccb6a-6367-409b-b996-4946fa2c8981/galera/0.log" Mar 07 09:27:43 crc kubenswrapper[4761]: I0307 09:27:43.749806 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe/mysql-bootstrap/0.log" Mar 07 09:27:43 crc kubenswrapper[4761]: I0307 09:27:43.968353 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe/mysql-bootstrap/0.log" Mar 07 09:27:44 crc kubenswrapper[4761]: I0307 09:27:44.041125 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe/galera/0.log" Mar 07 09:27:44 crc kubenswrapper[4761]: I0307 09:27:44.046025 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe/galera/1.log" Mar 07 09:27:44 crc kubenswrapper[4761]: I0307 09:27:44.280637 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstackclient_212a33ff-09a0-4654-adff-687f8d9145a6/openstackclient/0.log" Mar 07 09:27:44 crc kubenswrapper[4761]: I0307 09:27:44.448808 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-p5vt2_a6c2f90d-fff9-4f86-b1c4-432d76275714/openstack-network-exporter/0.log" Mar 07 09:27:44 crc kubenswrapper[4761]: I0307 09:27:44.587449 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-blwhr_7edcf92b-670b-42be-bea0-082d948e2bef/ovsdb-server-init/0.log" Mar 07 09:27:44 crc kubenswrapper[4761]: I0307 09:27:44.801183 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-blwhr_7edcf92b-670b-42be-bea0-082d948e2bef/ovs-vswitchd/0.log" Mar 07 09:27:44 crc kubenswrapper[4761]: I0307 09:27:44.808652 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-blwhr_7edcf92b-670b-42be-bea0-082d948e2bef/ovsdb-server-init/0.log" Mar 07 09:27:44 crc kubenswrapper[4761]: I0307 09:27:44.811323 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-blwhr_7edcf92b-670b-42be-bea0-082d948e2bef/ovsdb-server/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.022291 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-wq5n6_9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d/ovn-controller/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.136366 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_34c23fbf-c0a4-4b0e-bc41-e23eab413801/nova-metadata-metadata/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.238962 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-xx9pc_f1b69a5f-4327-4ef7-a28d-a638e579ea5d/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.336743 
4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f12e8753-c20a-460e-a4a6-a69f604df651/ovn-northd/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.360184 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f12e8753-c20a-460e-a4a6-a69f604df651/openstack-network-exporter/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.501077 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_97d68716-6a14-491d-8f4c-c3884ce45af4/openstack-network-exporter/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.561533 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_97d68716-6a14-491d-8f4c-c3884ce45af4/ovsdbserver-nb/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.738938 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8327390a-a37e-4c5f-9662-88cd5b832a3d/openstack-network-exporter/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.747820 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8327390a-a37e-4c5f-9662-88cd5b832a3d/ovsdbserver-sb/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.994971 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-84bcb6db96-7gd85_ae33121e-ffd0-48c2-b440-384ae5683dce/placement-api/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.057013 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-84bcb6db96-7gd85_ae33121e-ffd0-48c2-b440-384ae5683dce/placement-log/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.067619 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_526b9328-0f86-4c3d-9a27-116742cee11a/init-config-reloader/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.249149 4761 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_526b9328-0f86-4c3d-9a27-116742cee11a/init-config-reloader/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.264952 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_526b9328-0f86-4c3d-9a27-116742cee11a/prometheus/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.271223 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_526b9328-0f86-4c3d-9a27-116742cee11a/thanos-sidecar/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.333781 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_526b9328-0f86-4c3d-9a27-116742cee11a/config-reloader/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.472264 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ee9f03ce-b3a6-440c-8b34-16c66dac3e00/setup-container/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.640741 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ee9f03ce-b3a6-440c-8b34-16c66dac3e00/setup-container/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.683278 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ee9f03ce-b3a6-440c-8b34-16c66dac3e00/rabbitmq/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.761064 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b857c4b2-5d07-434c-aeb0-7189b087b650/setup-container/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.975667 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b857c4b2-5d07-434c-aeb0-7189b087b650/rabbitmq/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.012212 4761 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_rabbitmq-server-1_4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc/setup-container/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.022976 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b857c4b2-5d07-434c-aeb0-7189b087b650/setup-container/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.277161 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc/setup-container/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.356438 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_894f6ffc-2563-49a6-913d-6b0b83a70fa3/setup-container/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.376316 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc/rabbitmq/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.497350 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_894f6ffc-2563-49a6-913d-6b0b83a70fa3/setup-container/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.603437 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_894f6ffc-2563-49a6-913d-6b0b83a70fa3/rabbitmq/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.646844 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h_3aa544e2-be60-4e2a-9d61-1634fbf51479/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.835728 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-h7pjk_7fb04149-6828-4d2d-ae60-8425380b1219/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 
09:27:47.882533 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm_8c31bde2-d536-45b0-88c5-966abe8f4e1c/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.120905 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-62nh6_bff456cc-066d-4ffe-a805-cd7a82d7d6e1/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.127347 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-hvs2h_c64904be-c7ab-4389-8efc-1fa8d0b25c20/ssh-known-hosts-edpm-deployment/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.418784 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-858bf88ddc-crlf2_bcbcfcf2-9d9b-4087-aed7-1109de6d07ec/proxy-server/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.523328 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-jqk77_34132cc8-6037-4a17-9a58-5736caf6130b/swift-ring-rebalance/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.590192 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-858bf88ddc-crlf2_bcbcfcf2-9d9b-4087-aed7-1109de6d07ec/proxy-httpd/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.680708 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/account-auditor/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.734242 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/account-reaper/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.885080 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/account-server/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.890643 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/account-replicator/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.907287 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/container-auditor/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.024900 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/container-replicator/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.067734 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/container-updater/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.184189 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/container-server/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.215309 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/object-auditor/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.446283 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/object-expirer/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.491846 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/object-replicator/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.544586 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/object-updater/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.621872 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/object-server/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.646120 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/rsync/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.814906 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/swift-recon-cron/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.949694 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp_79854881-fc6e-4976-b6c3-ac4f5fa42340/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:50 crc kubenswrapper[4761]: I0307 09:27:50.082175 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb_729bd1e7-c268-4327-b30b-3f946a06775e/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:50 crc kubenswrapper[4761]: I0307 09:27:50.290059 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_03e65954-2a26-4e66-b033-a57a384097f1/test-operator-logs-container/0.log" Mar 07 09:27:50 crc kubenswrapper[4761]: I0307 09:27:50.455666 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-q92t4_69faf2be-decb-4f75-be02-7f0d23bea59a/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:50 crc kubenswrapper[4761]: I0307 09:27:50.663974 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_tempest-tests-tempest_cf1a0263-2849-4fc3-a733-eebca0481aae/tempest-tests-tempest-tests-runner/0.log" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.141997 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547928-ncg8q"] Mar 07 09:28:00 crc kubenswrapper[4761]: E0307 09:28:00.143332 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerName="registry-server" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.143350 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerName="registry-server" Mar 07 09:28:00 crc kubenswrapper[4761]: E0307 09:28:00.143395 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerName="extract-content" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.143403 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerName="extract-content" Mar 07 09:28:00 crc kubenswrapper[4761]: E0307 09:28:00.143438 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerName="extract-utilities" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.143483 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerName="extract-utilities" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.143856 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerName="registry-server" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.144996 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547928-ncg8q" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.146951 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.148392 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.148564 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.157952 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547928-ncg8q"] Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.181175 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsxs2\" (UniqueName: \"kubernetes.io/projected/643aaaef-6add-469a-9741-96a3088eeebe-kube-api-access-bsxs2\") pod \"auto-csr-approver-29547928-ncg8q\" (UID: \"643aaaef-6add-469a-9741-96a3088eeebe\") " pod="openshift-infra/auto-csr-approver-29547928-ncg8q" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.283505 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsxs2\" (UniqueName: \"kubernetes.io/projected/643aaaef-6add-469a-9741-96a3088eeebe-kube-api-access-bsxs2\") pod \"auto-csr-approver-29547928-ncg8q\" (UID: \"643aaaef-6add-469a-9741-96a3088eeebe\") " pod="openshift-infra/auto-csr-approver-29547928-ncg8q" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.310594 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsxs2\" (UniqueName: \"kubernetes.io/projected/643aaaef-6add-469a-9741-96a3088eeebe-kube-api-access-bsxs2\") pod \"auto-csr-approver-29547928-ncg8q\" (UID: \"643aaaef-6add-469a-9741-96a3088eeebe\") " 
pod="openshift-infra/auto-csr-approver-29547928-ncg8q" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.346946 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d4e95617-c055-4b9f-ac38-32a41c2e8846/memcached/0.log" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.472156 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547928-ncg8q" Mar 07 09:28:01 crc kubenswrapper[4761]: I0307 09:28:01.200741 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547928-ncg8q"] Mar 07 09:28:01 crc kubenswrapper[4761]: W0307 09:28:01.623550 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod643aaaef_6add_469a_9741_96a3088eeebe.slice/crio-74cada4a5ddc02d9a8d9a8943c95e5f342c5d4c3c5ce77694251e5b029543773 WatchSource:0}: Error finding container 74cada4a5ddc02d9a8d9a8943c95e5f342c5d4c3c5ce77694251e5b029543773: Status 404 returned error can't find the container with id 74cada4a5ddc02d9a8d9a8943c95e5f342c5d4c3c5ce77694251e5b029543773 Mar 07 09:28:02 crc kubenswrapper[4761]: I0307 09:28:02.064105 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547928-ncg8q" event={"ID":"643aaaef-6add-469a-9741-96a3088eeebe","Type":"ContainerStarted","Data":"74cada4a5ddc02d9a8d9a8943c95e5f342c5d4c3c5ce77694251e5b029543773"} Mar 07 09:28:03 crc kubenswrapper[4761]: I0307 09:28:03.121158 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547928-ncg8q" event={"ID":"643aaaef-6add-469a-9741-96a3088eeebe","Type":"ContainerStarted","Data":"9b4b81d8423533fc4e2033c7d32b905359122fcff103ec0b3ff63b0694bbb96c"} Mar 07 09:28:03 crc kubenswrapper[4761]: I0307 09:28:03.153589 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29547928-ncg8q" podStartSLOduration=2.344644951 podStartE2EDuration="3.15334838s" podCreationTimestamp="2026-03-07 09:28:00 +0000 UTC" firstStartedPulling="2026-03-07 09:28:01.628004311 +0000 UTC m=+5938.537170836" lastFinishedPulling="2026-03-07 09:28:02.43670779 +0000 UTC m=+5939.345874265" observedRunningTime="2026-03-07 09:28:03.139058976 +0000 UTC m=+5940.048225451" watchObservedRunningTime="2026-03-07 09:28:03.15334838 +0000 UTC m=+5940.062514855" Mar 07 09:28:04 crc kubenswrapper[4761]: I0307 09:28:04.133968 4761 generic.go:334] "Generic (PLEG): container finished" podID="643aaaef-6add-469a-9741-96a3088eeebe" containerID="9b4b81d8423533fc4e2033c7d32b905359122fcff103ec0b3ff63b0694bbb96c" exitCode=0 Mar 07 09:28:04 crc kubenswrapper[4761]: I0307 09:28:04.134063 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547928-ncg8q" event={"ID":"643aaaef-6add-469a-9741-96a3088eeebe","Type":"ContainerDied","Data":"9b4b81d8423533fc4e2033c7d32b905359122fcff103ec0b3ff63b0694bbb96c"} Mar 07 09:28:05 crc kubenswrapper[4761]: I0307 09:28:05.572883 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547928-ncg8q" Mar 07 09:28:05 crc kubenswrapper[4761]: I0307 09:28:05.713748 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsxs2\" (UniqueName: \"kubernetes.io/projected/643aaaef-6add-469a-9741-96a3088eeebe-kube-api-access-bsxs2\") pod \"643aaaef-6add-469a-9741-96a3088eeebe\" (UID: \"643aaaef-6add-469a-9741-96a3088eeebe\") " Mar 07 09:28:05 crc kubenswrapper[4761]: I0307 09:28:05.728994 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/643aaaef-6add-469a-9741-96a3088eeebe-kube-api-access-bsxs2" (OuterVolumeSpecName: "kube-api-access-bsxs2") pod "643aaaef-6add-469a-9741-96a3088eeebe" (UID: "643aaaef-6add-469a-9741-96a3088eeebe"). InnerVolumeSpecName "kube-api-access-bsxs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:28:05 crc kubenswrapper[4761]: I0307 09:28:05.816407 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsxs2\" (UniqueName: \"kubernetes.io/projected/643aaaef-6add-469a-9741-96a3088eeebe-kube-api-access-bsxs2\") on node \"crc\" DevicePath \"\"" Mar 07 09:28:06 crc kubenswrapper[4761]: I0307 09:28:06.155272 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547928-ncg8q" event={"ID":"643aaaef-6add-469a-9741-96a3088eeebe","Type":"ContainerDied","Data":"74cada4a5ddc02d9a8d9a8943c95e5f342c5d4c3c5ce77694251e5b029543773"} Mar 07 09:28:06 crc kubenswrapper[4761]: I0307 09:28:06.155764 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74cada4a5ddc02d9a8d9a8943c95e5f342c5d4c3c5ce77694251e5b029543773" Mar 07 09:28:06 crc kubenswrapper[4761]: I0307 09:28:06.155336 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547928-ncg8q" Mar 07 09:28:06 crc kubenswrapper[4761]: I0307 09:28:06.195670 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547922-mbgmv"] Mar 07 09:28:06 crc kubenswrapper[4761]: I0307 09:28:06.211196 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547922-mbgmv"] Mar 07 09:28:07 crc kubenswrapper[4761]: I0307 09:28:07.719643 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb4b54e0-9e87-43bd-99c1-dd0fb9027801" path="/var/lib/kubelet/pods/cb4b54e0-9e87-43bd-99c1-dd0fb9027801/volumes" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.454507 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wqqcf"] Mar 07 09:28:19 crc kubenswrapper[4761]: E0307 09:28:19.455748 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="643aaaef-6add-469a-9741-96a3088eeebe" containerName="oc" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.455768 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="643aaaef-6add-469a-9741-96a3088eeebe" containerName="oc" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.456165 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="643aaaef-6add-469a-9741-96a3088eeebe" containerName="oc" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.458495 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.480400 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wqqcf"] Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.621830 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdhs2\" (UniqueName: \"kubernetes.io/projected/76e4569f-8115-4835-b5ae-3a923dc1d966-kube-api-access-hdhs2\") pod \"certified-operators-wqqcf\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.622409 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-utilities\") pod \"certified-operators-wqqcf\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.622785 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-catalog-content\") pod \"certified-operators-wqqcf\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.724771 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-utilities\") pod \"certified-operators-wqqcf\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.724855 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-catalog-content\") pod \"certified-operators-wqqcf\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.724924 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdhs2\" (UniqueName: \"kubernetes.io/projected/76e4569f-8115-4835-b5ae-3a923dc1d966-kube-api-access-hdhs2\") pod \"certified-operators-wqqcf\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.725410 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-utilities\") pod \"certified-operators-wqqcf\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.725426 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-catalog-content\") pod \"certified-operators-wqqcf\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.747320 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdhs2\" (UniqueName: \"kubernetes.io/projected/76e4569f-8115-4835-b5ae-3a923dc1d966-kube-api-access-hdhs2\") pod \"certified-operators-wqqcf\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.786768 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:20 crc kubenswrapper[4761]: I0307 09:28:20.304998 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wqqcf"] Mar 07 09:28:20 crc kubenswrapper[4761]: I0307 09:28:20.333257 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqqcf" event={"ID":"76e4569f-8115-4835-b5ae-3a923dc1d966","Type":"ContainerStarted","Data":"ff9296646d1566a22e445fb48ee8f75bcece950f605d8623b1b503a482413d4a"} Mar 07 09:28:21 crc kubenswrapper[4761]: I0307 09:28:21.347264 4761 generic.go:334] "Generic (PLEG): container finished" podID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerID="f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953" exitCode=0 Mar 07 09:28:21 crc kubenswrapper[4761]: I0307 09:28:21.347319 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqqcf" event={"ID":"76e4569f-8115-4835-b5ae-3a923dc1d966","Type":"ContainerDied","Data":"f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953"} Mar 07 09:28:22 crc kubenswrapper[4761]: I0307 09:28:22.361835 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqqcf" event={"ID":"76e4569f-8115-4835-b5ae-3a923dc1d966","Type":"ContainerStarted","Data":"b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a"} Mar 07 09:28:23 crc kubenswrapper[4761]: I0307 09:28:23.117325 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv_9c633896-8e1e-4395-afb6-a94b40ef9e66/util/0.log" Mar 07 09:28:23 crc kubenswrapper[4761]: I0307 09:28:23.398343 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv_9c633896-8e1e-4395-afb6-a94b40ef9e66/util/0.log" 
Mar 07 09:28:23 crc kubenswrapper[4761]: I0307 09:28:23.420285 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv_9c633896-8e1e-4395-afb6-a94b40ef9e66/pull/0.log" Mar 07 09:28:24 crc kubenswrapper[4761]: I0307 09:28:24.265413 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv_9c633896-8e1e-4395-afb6-a94b40ef9e66/pull/0.log" Mar 07 09:28:24 crc kubenswrapper[4761]: I0307 09:28:24.384306 4761 generic.go:334] "Generic (PLEG): container finished" podID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerID="b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a" exitCode=0 Mar 07 09:28:24 crc kubenswrapper[4761]: I0307 09:28:24.384341 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqqcf" event={"ID":"76e4569f-8115-4835-b5ae-3a923dc1d966","Type":"ContainerDied","Data":"b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a"} Mar 07 09:28:24 crc kubenswrapper[4761]: I0307 09:28:24.485911 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv_9c633896-8e1e-4395-afb6-a94b40ef9e66/util/0.log" Mar 07 09:28:24 crc kubenswrapper[4761]: I0307 09:28:24.553485 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv_9c633896-8e1e-4395-afb6-a94b40ef9e66/pull/0.log" Mar 07 09:28:24 crc kubenswrapper[4761]: I0307 09:28:24.710309 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv_9c633896-8e1e-4395-afb6-a94b40ef9e66/extract/0.log" Mar 07 09:28:25 crc kubenswrapper[4761]: I0307 09:28:25.312673 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5d87c9d997-mxh22_90a2f442-aea1-44ac-bbb8-ba58c0969806/manager/1.log" Mar 07 09:28:25 crc kubenswrapper[4761]: I0307 09:28:25.396269 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqqcf" event={"ID":"76e4569f-8115-4835-b5ae-3a923dc1d966","Type":"ContainerStarted","Data":"dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713"} Mar 07 09:28:25 crc kubenswrapper[4761]: I0307 09:28:25.436114 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wqqcf" podStartSLOduration=3.007008408 podStartE2EDuration="6.43609575s" podCreationTimestamp="2026-03-07 09:28:19 +0000 UTC" firstStartedPulling="2026-03-07 09:28:21.349893949 +0000 UTC m=+5958.259060424" lastFinishedPulling="2026-03-07 09:28:24.778981291 +0000 UTC m=+5961.688147766" observedRunningTime="2026-03-07 09:28:25.417212313 +0000 UTC m=+5962.326378798" watchObservedRunningTime="2026-03-07 09:28:25.43609575 +0000 UTC m=+5962.345262225" Mar 07 09:28:25 crc kubenswrapper[4761]: I0307 09:28:25.583979 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5d87c9d997-mxh22_90a2f442-aea1-44ac-bbb8-ba58c0969806/manager/0.log" Mar 07 09:28:26 crc kubenswrapper[4761]: I0307 09:28:26.148992 4761 scope.go:117] "RemoveContainer" containerID="9cdb2507f999d812dc16f4ffd1e90008e63bd681015d6c2edcad8e1db5068010" Mar 07 09:28:26 crc kubenswrapper[4761]: I0307 09:28:26.735586 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64db6967f8-vv8sh_a4bc9370-c64d-4e5e-a0bd-70297abb8c0d/manager/1.log" Mar 07 09:28:27 crc kubenswrapper[4761]: I0307 09:28:27.063167 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64db6967f8-vv8sh_a4bc9370-c64d-4e5e-a0bd-70297abb8c0d/manager/0.log" Mar 07 09:28:27 crc kubenswrapper[4761]: I0307 09:28:27.258668 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-cf99c678f-pnxcz_0ce5a055-df90-4071-a5cf-f7361e01e5fe/manager/1.log" Mar 07 09:28:27 crc kubenswrapper[4761]: I0307 09:28:27.657660 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-cf99c678f-pnxcz_0ce5a055-df90-4071-a5cf-f7361e01e5fe/manager/0.log" Mar 07 09:28:27 crc kubenswrapper[4761]: I0307 09:28:27.765226 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-78bc7f9bd9-9wqmf_3b477f52-57ee-4037-af3a-fa987453bdf2/manager/0.log" Mar 07 09:28:28 crc kubenswrapper[4761]: I0307 09:28:28.359313 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-545456dc4-5gtdw_9dcfc7f8-35e7-4fab-bb7a-c900caf10641/manager/1.log" Mar 07 09:28:28 crc kubenswrapper[4761]: I0307 09:28:28.528200 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-545456dc4-5gtdw_9dcfc7f8-35e7-4fab-bb7a-c900caf10641/manager/0.log" Mar 07 09:28:28 crc kubenswrapper[4761]: I0307 09:28:28.685735 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5995f4446f-zp8ch_6bdda9de-4711-4fbc-b9d2-5f867691450a/manager/0.log" Mar 07 09:28:28 crc kubenswrapper[4761]: I0307 09:28:28.745190 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-vx8wn_9554e552-2329-4e93-835e-9dbcad7b7519/manager/0.log" Mar 07 09:28:28 crc kubenswrapper[4761]: I0307 09:28:28.996391 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-bh54b_2db89b29-3889-4242-9ede-98140f3f8319/manager/1.log" Mar 07 09:28:29 crc kubenswrapper[4761]: I0307 09:28:29.021594 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7c789f89c6-l9ztx_baefa6a4-53d3-4158-a74f-87c9b766d760/manager/0.log" Mar 07 09:28:29 crc kubenswrapper[4761]: I0307 09:28:29.155979 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-bh54b_2db89b29-3889-4242-9ede-98140f3f8319/manager/0.log" Mar 07 09:28:29 crc kubenswrapper[4761]: I0307 09:28:29.292798 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b6bfb6475-c79kh_0febfb54-7188-4247-8d9b-2f166bf597ee/manager/1.log" Mar 07 09:28:29 crc kubenswrapper[4761]: I0307 09:28:29.446576 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b6bfb6475-c79kh_0febfb54-7188-4247-8d9b-2f166bf597ee/manager/0.log" Mar 07 09:28:29 crc kubenswrapper[4761]: I0307 09:28:29.495229 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54688575f-lgkvz_0bfdda94-7f9c-45d0-897f-0b65cf16e0fd/manager/1.log" Mar 07 09:28:29 crc kubenswrapper[4761]: I0307 09:28:29.717661 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74b6b5dc96-45bp8_9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e/manager/1.log" Mar 07 09:28:29 crc kubenswrapper[4761]: I0307 09:28:29.721846 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54688575f-lgkvz_0bfdda94-7f9c-45d0-897f-0b65cf16e0fd/manager/0.log" Mar 07 09:28:29 crc kubenswrapper[4761]: I0307 09:28:29.786892 4761 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:29 crc kubenswrapper[4761]: I0307 09:28:29.787873 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:30 crc kubenswrapper[4761]: I0307 09:28:30.143202 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5d86c7ddb7-h9xzz_353016f5-6859-4193-9845-69bf540c7ab3/manager/1.log" Mar 07 09:28:30 crc kubenswrapper[4761]: I0307 09:28:30.202260 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74b6b5dc96-45bp8_9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e/manager/0.log" Mar 07 09:28:30 crc kubenswrapper[4761]: I0307 09:28:30.626286 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5d86c7ddb7-h9xzz_353016f5-6859-4193-9845-69bf540c7ab3/manager/0.log" Mar 07 09:28:30 crc kubenswrapper[4761]: I0307 09:28:30.640243 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w_bd23eeaa-ed7e-45ea-9a40-613ac4e11120/manager/1.log" Mar 07 09:28:30 crc kubenswrapper[4761]: I0307 09:28:30.844921 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wqqcf" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="registry-server" probeResult="failure" output=< Mar 07 09:28:30 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:28:30 crc kubenswrapper[4761]: > Mar 07 09:28:30 crc kubenswrapper[4761]: I0307 09:28:30.852308 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w_bd23eeaa-ed7e-45ea-9a40-613ac4e11120/manager/0.log" Mar 07 09:28:30 
crc kubenswrapper[4761]: I0307 09:28:30.974661 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6db6876945-wvt5q_bf4af368-4dee-4a4a-8c43-fd7991ac3366/manager/1.log" Mar 07 09:28:31 crc kubenswrapper[4761]: I0307 09:28:31.099161 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-65ddc7ddc5-52tbc_6a6b6075-ec04-418f-ba28-09f11f19b78e/manager/1.log" Mar 07 09:28:31 crc kubenswrapper[4761]: I0307 09:28:31.119145 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6bfd49cd44-m98b8_b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6/operator/0.log" Mar 07 09:28:31 crc kubenswrapper[4761]: I0307 09:28:31.348194 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-j8w2n_69902561-929c-428a-8dab-7a9a91fb3084/registry-server/0.log" Mar 07 09:28:31 crc kubenswrapper[4761]: I0307 09:28:31.849677 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-75684d597f-cpn97_0a9a2953-a51f-42b6-8ff8-d3f860ff6377/manager/1.log" Mar 07 09:28:31 crc kubenswrapper[4761]: I0307 09:28:31.913515 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-75684d597f-cpn97_0a9a2953-a51f-42b6-8ff8-d3f860ff6377/manager/0.log" Mar 07 09:28:32 crc kubenswrapper[4761]: I0307 09:28:32.135560 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-648564c9fc-xqhz5_6540426d-eaf7-4f8f-ab46-8305c545e1cb/manager/0.log" Mar 07 09:28:32 crc kubenswrapper[4761]: I0307 09:28:32.306426 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-6pvgm_ee7ca114-a92b-4ed8-99ec-5d5ab002dca0/operator/0.log" Mar 07 09:28:32 
crc kubenswrapper[4761]: I0307 09:28:32.448761 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9b9ff9f4d-spw5z_bc92e2bf-a093-4327-a1cd-807a2d916864/manager/0.log" Mar 07 09:28:32 crc kubenswrapper[4761]: I0307 09:28:32.790484 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-55b5ff4dbb-njxxc_7d43dfb0-643f-4e45-8e27-42b96b2c5ff9/manager/1.log" Mar 07 09:28:32 crc kubenswrapper[4761]: I0307 09:28:32.874227 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-55b5ff4dbb-njxxc_7d43dfb0-643f-4e45-8e27-42b96b2c5ff9/manager/0.log" Mar 07 09:28:33 crc kubenswrapper[4761]: I0307 09:28:33.144518 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-pg2pp_efa0b70d-ed5b-48ba-a601-bfc64689ed5a/manager/0.log" Mar 07 09:28:33 crc kubenswrapper[4761]: I0307 09:28:33.161645 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6ccb65d888-km2fj_6c6a959e-39ee-46ae-9cc5-03fe72cedb7a/manager/0.log" Mar 07 09:28:33 crc kubenswrapper[4761]: I0307 09:28:33.607372 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-65ddc7ddc5-52tbc_6a6b6075-ec04-418f-ba28-09f11f19b78e/manager/0.log" Mar 07 09:28:37 crc kubenswrapper[4761]: I0307 09:28:37.668598 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6db6876945-wvt5q_bf4af368-4dee-4a4a-8c43-fd7991ac3366/manager/0.log" Mar 07 09:28:40 crc kubenswrapper[4761]: I0307 09:28:40.832866 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wqqcf" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="registry-server" 
probeResult="failure" output=< Mar 07 09:28:40 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:28:40 crc kubenswrapper[4761]: > Mar 07 09:28:49 crc kubenswrapper[4761]: I0307 09:28:49.877579 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:49 crc kubenswrapper[4761]: I0307 09:28:49.939288 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:50 crc kubenswrapper[4761]: I0307 09:28:50.654430 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wqqcf"] Mar 07 09:28:51 crc kubenswrapper[4761]: I0307 09:28:51.743550 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wqqcf" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="registry-server" containerID="cri-o://dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713" gracePeriod=2 Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.308158 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.335633 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-utilities\") pod \"76e4569f-8115-4835-b5ae-3a923dc1d966\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.335823 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdhs2\" (UniqueName: \"kubernetes.io/projected/76e4569f-8115-4835-b5ae-3a923dc1d966-kube-api-access-hdhs2\") pod \"76e4569f-8115-4835-b5ae-3a923dc1d966\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.335998 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-catalog-content\") pod \"76e4569f-8115-4835-b5ae-3a923dc1d966\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.337537 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-utilities" (OuterVolumeSpecName: "utilities") pod "76e4569f-8115-4835-b5ae-3a923dc1d966" (UID: "76e4569f-8115-4835-b5ae-3a923dc1d966"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.358702 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e4569f-8115-4835-b5ae-3a923dc1d966-kube-api-access-hdhs2" (OuterVolumeSpecName: "kube-api-access-hdhs2") pod "76e4569f-8115-4835-b5ae-3a923dc1d966" (UID: "76e4569f-8115-4835-b5ae-3a923dc1d966"). InnerVolumeSpecName "kube-api-access-hdhs2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.428743 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76e4569f-8115-4835-b5ae-3a923dc1d966" (UID: "76e4569f-8115-4835-b5ae-3a923dc1d966"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.439979 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.440012 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.440025 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdhs2\" (UniqueName: \"kubernetes.io/projected/76e4569f-8115-4835-b5ae-3a923dc1d966-kube-api-access-hdhs2\") on node \"crc\" DevicePath \"\"" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.765301 4761 generic.go:334] "Generic (PLEG): container finished" podID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerID="dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713" exitCode=0 Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.765343 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqqcf" event={"ID":"76e4569f-8115-4835-b5ae-3a923dc1d966","Type":"ContainerDied","Data":"dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713"} Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.765378 4761 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-wqqcf" event={"ID":"76e4569f-8115-4835-b5ae-3a923dc1d966","Type":"ContainerDied","Data":"ff9296646d1566a22e445fb48ee8f75bcece950f605d8623b1b503a482413d4a"} Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.765399 4761 scope.go:117] "RemoveContainer" containerID="dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.769189 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.789911 4761 scope.go:117] "RemoveContainer" containerID="b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.814130 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wqqcf"] Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.826948 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wqqcf"] Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.828079 4761 scope.go:117] "RemoveContainer" containerID="f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.892921 4761 scope.go:117] "RemoveContainer" containerID="dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713" Mar 07 09:28:52 crc kubenswrapper[4761]: E0307 09:28:52.893499 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713\": container with ID starting with dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713 not found: ID does not exist" containerID="dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 
09:28:52.893551 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713"} err="failed to get container status \"dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713\": rpc error: code = NotFound desc = could not find container \"dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713\": container with ID starting with dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713 not found: ID does not exist" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.893582 4761 scope.go:117] "RemoveContainer" containerID="b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a" Mar 07 09:28:52 crc kubenswrapper[4761]: E0307 09:28:52.894628 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a\": container with ID starting with b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a not found: ID does not exist" containerID="b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.894706 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a"} err="failed to get container status \"b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a\": rpc error: code = NotFound desc = could not find container \"b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a\": container with ID starting with b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a not found: ID does not exist" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.894800 4761 scope.go:117] "RemoveContainer" containerID="f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953" Mar 07 09:28:52 crc 
kubenswrapper[4761]: E0307 09:28:52.895530 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953\": container with ID starting with f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953 not found: ID does not exist" containerID="f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.895581 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953"} err="failed to get container status \"f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953\": rpc error: code = NotFound desc = could not find container \"f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953\": container with ID starting with f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953 not found: ID does not exist" Mar 07 09:28:53 crc kubenswrapper[4761]: I0307 09:28:53.726042 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" path="/var/lib/kubelet/pods/76e4569f-8115-4835-b5ae-3a923dc1d966/volumes" Mar 07 09:28:54 crc kubenswrapper[4761]: I0307 09:28:54.625005 4761 trace.go:236] Trace[1246442193]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-1" (07-Mar-2026 09:28:53.579) (total time: 1043ms): Mar 07 09:28:54 crc kubenswrapper[4761]: Trace[1246442193]: [1.043586211s] [1.043586211s] END Mar 07 09:28:59 crc kubenswrapper[4761]: I0307 09:28:59.948371 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-fkrlf_9b718980-7c2c-4b0f-b605-331928c5a58e/control-plane-machine-set-operator/0.log" Mar 07 09:29:00 crc kubenswrapper[4761]: I0307 09:29:00.054500 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xqmxc_828a167b-cf1b-433c-844a-7ca236afd4b9/kube-rbac-proxy/0.log" Mar 07 09:29:00 crc kubenswrapper[4761]: I0307 09:29:00.103030 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xqmxc_828a167b-cf1b-433c-844a-7ca236afd4b9/machine-api-operator/0.log" Mar 07 09:29:16 crc kubenswrapper[4761]: I0307 09:29:16.759349 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-b26zv_abfb0a2a-4a92-4619-9335-3b8dcdda269d/cert-manager-controller/0.log" Mar 07 09:29:16 crc kubenswrapper[4761]: I0307 09:29:16.940876 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-xg44s_cd2551ef-1dad-4b6f-bbf0-8bb114a9ebe2/cert-manager-cainjector/0.log" Mar 07 09:29:16 crc kubenswrapper[4761]: I0307 09:29:16.956776 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-98h6c_563c8932-7287-4158-bb9a-7f464230ae9f/cert-manager-webhook/0.log" Mar 07 09:29:33 crc kubenswrapper[4761]: I0307 09:29:33.261591 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-p788d_37e4e36d-77bd-4618-8b4d-4653a71a0f2e/nmstate-handler/0.log" Mar 07 09:29:33 crc kubenswrapper[4761]: I0307 09:29:33.292561 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-nhw26_b295a49c-b8ec-45ab-a04e-b08d9fafe91b/nmstate-console-plugin/0.log" Mar 07 09:29:33 crc kubenswrapper[4761]: I0307 09:29:33.471454 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-jmzd9_e9969064-2a65-4728-b9b2-8a02da45bacb/kube-rbac-proxy/0.log" Mar 07 09:29:33 crc kubenswrapper[4761]: I0307 09:29:33.495687 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-jmzd9_e9969064-2a65-4728-b9b2-8a02da45bacb/nmstate-metrics/0.log" Mar 07 09:29:33 crc kubenswrapper[4761]: I0307 09:29:33.601526 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-9894j_379eee65-d23d-4c2e-94fe-254d7069d0e6/nmstate-operator/0.log" Mar 07 09:29:33 crc kubenswrapper[4761]: I0307 09:29:33.719502 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-vrchq_fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3/nmstate-webhook/0.log" Mar 07 09:29:43 crc kubenswrapper[4761]: I0307 09:29:43.768766 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:29:43 crc kubenswrapper[4761]: I0307 09:29:43.769684 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:29:48 crc kubenswrapper[4761]: I0307 09:29:48.824341 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4c45cc-fmrsq_8a7603da-0d59-431b-82c9-59c887e9f8d6/kube-rbac-proxy/0.log" Mar 07 09:29:48 crc kubenswrapper[4761]: I0307 09:29:48.850545 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4c45cc-fmrsq_8a7603da-0d59-431b-82c9-59c887e9f8d6/manager/1.log" Mar 07 09:29:49 crc kubenswrapper[4761]: I0307 09:29:49.005766 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4c45cc-fmrsq_8a7603da-0d59-431b-82c9-59c887e9f8d6/manager/0.log" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.169084 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg"] Mar 07 09:30:00 crc kubenswrapper[4761]: E0307 09:30:00.170217 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="extract-content" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.170237 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="extract-content" Mar 07 09:30:00 crc kubenswrapper[4761]: E0307 09:30:00.170270 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="registry-server" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.170278 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="registry-server" Mar 07 09:30:00 crc kubenswrapper[4761]: E0307 09:30:00.170317 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="extract-utilities" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.170325 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="extract-utilities" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.170640 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="registry-server" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.171785 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.177103 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.177588 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.185012 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547930-wk7nk"] Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.187607 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547930-wk7nk" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.192516 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.192638 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.192824 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.202820 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547930-wk7nk"] Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.217963 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg"] Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.248292 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fhvc\" (UniqueName: 
\"kubernetes.io/projected/579abd64-02ee-47c8-b1ae-a7116434d46c-kube-api-access-2fhvc\") pod \"auto-csr-approver-29547930-wk7nk\" (UID: \"579abd64-02ee-47c8-b1ae-a7116434d46c\") " pod="openshift-infra/auto-csr-approver-29547930-wk7nk" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.248356 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-config-volume\") pod \"collect-profiles-29547930-9fdvg\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.248419 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-secret-volume\") pod \"collect-profiles-29547930-9fdvg\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.248456 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkwr4\" (UniqueName: \"kubernetes.io/projected/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-kube-api-access-tkwr4\") pod \"collect-profiles-29547930-9fdvg\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.352067 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-secret-volume\") pod \"collect-profiles-29547930-9fdvg\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 
crc kubenswrapper[4761]: I0307 09:30:00.352178 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkwr4\" (UniqueName: \"kubernetes.io/projected/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-kube-api-access-tkwr4\") pod \"collect-profiles-29547930-9fdvg\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.352534 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fhvc\" (UniqueName: \"kubernetes.io/projected/579abd64-02ee-47c8-b1ae-a7116434d46c-kube-api-access-2fhvc\") pod \"auto-csr-approver-29547930-wk7nk\" (UID: \"579abd64-02ee-47c8-b1ae-a7116434d46c\") " pod="openshift-infra/auto-csr-approver-29547930-wk7nk" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.352633 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-config-volume\") pod \"collect-profiles-29547930-9fdvg\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.353801 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-config-volume\") pod \"collect-profiles-29547930-9fdvg\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.367934 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-secret-volume\") pod \"collect-profiles-29547930-9fdvg\" (UID: 
\"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.372966 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fhvc\" (UniqueName: \"kubernetes.io/projected/579abd64-02ee-47c8-b1ae-a7116434d46c-kube-api-access-2fhvc\") pod \"auto-csr-approver-29547930-wk7nk\" (UID: \"579abd64-02ee-47c8-b1ae-a7116434d46c\") " pod="openshift-infra/auto-csr-approver-29547930-wk7nk" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.375187 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkwr4\" (UniqueName: \"kubernetes.io/projected/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-kube-api-access-tkwr4\") pod \"collect-profiles-29547930-9fdvg\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.495940 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.520869 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547930-wk7nk" Mar 07 09:30:01 crc kubenswrapper[4761]: I0307 09:30:01.002861 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547930-wk7nk"] Mar 07 09:30:01 crc kubenswrapper[4761]: I0307 09:30:01.088286 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg"] Mar 07 09:30:01 crc kubenswrapper[4761]: I0307 09:30:01.613432 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547930-wk7nk" event={"ID":"579abd64-02ee-47c8-b1ae-a7116434d46c","Type":"ContainerStarted","Data":"1e60b0110fa6e1e6e03b0605d11f478811fd96a0e5bbab8ef27f76365c46c98a"} Mar 07 09:30:01 crc kubenswrapper[4761]: I0307 09:30:01.615158 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" event={"ID":"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061","Type":"ContainerStarted","Data":"1589f355dd8201cbc0e824e8ca6c8be826390e9595d7c0189bf6e0b2204f53aa"} Mar 07 09:30:01 crc kubenswrapper[4761]: I0307 09:30:01.615190 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" event={"ID":"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061","Type":"ContainerStarted","Data":"fd18bab158ae7210ea1d1fbdf514847d6e525375313b511b71bad776f21a801e"} Mar 07 09:30:01 crc kubenswrapper[4761]: I0307 09:30:01.633648 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" podStartSLOduration=1.63362875 podStartE2EDuration="1.63362875s" podCreationTimestamp="2026-03-07 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 09:30:01.629336324 +0000 UTC m=+6058.538502819" 
watchObservedRunningTime="2026-03-07 09:30:01.63362875 +0000 UTC m=+6058.542795225" Mar 07 09:30:02 crc kubenswrapper[4761]: I0307 09:30:02.629456 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547930-wk7nk" event={"ID":"579abd64-02ee-47c8-b1ae-a7116434d46c","Type":"ContainerStarted","Data":"6d1d67f32df6e518e4a1ae02e09de8e2379cfc59e9bd47a981339d6d16cadb53"} Mar 07 09:30:02 crc kubenswrapper[4761]: I0307 09:30:02.631119 4761 generic.go:334] "Generic (PLEG): container finished" podID="9ccf09d4-e0bb-46da-a49a-7e0d1ee72061" containerID="1589f355dd8201cbc0e824e8ca6c8be826390e9595d7c0189bf6e0b2204f53aa" exitCode=0 Mar 07 09:30:02 crc kubenswrapper[4761]: I0307 09:30:02.631159 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" event={"ID":"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061","Type":"ContainerDied","Data":"1589f355dd8201cbc0e824e8ca6c8be826390e9595d7c0189bf6e0b2204f53aa"} Mar 07 09:30:02 crc kubenswrapper[4761]: I0307 09:30:02.661685 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547930-wk7nk" podStartSLOduration=1.509603862 podStartE2EDuration="2.661664116s" podCreationTimestamp="2026-03-07 09:30:00 +0000 UTC" firstStartedPulling="2026-03-07 09:30:01.008455302 +0000 UTC m=+6057.917621777" lastFinishedPulling="2026-03-07 09:30:02.160515546 +0000 UTC m=+6059.069682031" observedRunningTime="2026-03-07 09:30:02.647535396 +0000 UTC m=+6059.556701861" watchObservedRunningTime="2026-03-07 09:30:02.661664116 +0000 UTC m=+6059.570830591" Mar 07 09:30:03 crc kubenswrapper[4761]: I0307 09:30:03.662419 4761 generic.go:334] "Generic (PLEG): container finished" podID="579abd64-02ee-47c8-b1ae-a7116434d46c" containerID="6d1d67f32df6e518e4a1ae02e09de8e2379cfc59e9bd47a981339d6d16cadb53" exitCode=0 Mar 07 09:30:03 crc kubenswrapper[4761]: I0307 09:30:03.663440 4761 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547930-wk7nk" event={"ID":"579abd64-02ee-47c8-b1ae-a7116434d46c","Type":"ContainerDied","Data":"6d1d67f32df6e518e4a1ae02e09de8e2379cfc59e9bd47a981339d6d16cadb53"} Mar 07 09:30:03 crc kubenswrapper[4761]: I0307 09:30:03.953612 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-hftl9_40c12f82-6c14-4659-80c5-ab38e649706a/prometheus-operator/0.log" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.134658 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.184900 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_60fad35f-402e-4c65-a097-a836c5692479/prometheus-operator-admission-webhook/0.log" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.265928 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6/prometheus-operator-admission-webhook/0.log" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.269633 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-secret-volume\") pod \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.270071 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-config-volume\") pod \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " Mar 07 09:30:04 crc 
kubenswrapper[4761]: I0307 09:30:04.270196 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkwr4\" (UniqueName: \"kubernetes.io/projected/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-kube-api-access-tkwr4\") pod \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.270778 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-config-volume" (OuterVolumeSpecName: "config-volume") pod "9ccf09d4-e0bb-46da-a49a-7e0d1ee72061" (UID: "9ccf09d4-e0bb-46da-a49a-7e0d1ee72061"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.275036 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9ccf09d4-e0bb-46da-a49a-7e0d1ee72061" (UID: "9ccf09d4-e0bb-46da-a49a-7e0d1ee72061"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.280563 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-kube-api-access-tkwr4" (OuterVolumeSpecName: "kube-api-access-tkwr4") pod "9ccf09d4-e0bb-46da-a49a-7e0d1ee72061" (UID: "9ccf09d4-e0bb-46da-a49a-7e0d1ee72061"). InnerVolumeSpecName "kube-api-access-tkwr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.373038 4761 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.373080 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkwr4\" (UniqueName: \"kubernetes.io/projected/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-kube-api-access-tkwr4\") on node \"crc\" DevicePath \"\"" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.373092 4761 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.435087 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-kfph9_b17d76c5-b5d9-4f79-841e-287d05540b40/operator/0.log" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.501113 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-bs4zz_6a8f8341-0209-4fdd-8fdd-4373ec14e18c/observability-ui-dashboards/0.log" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.643085 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-4l52t_0c90c3e5-de84-4cb1-ac22-fe02ca708196/perses-operator/0.log" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.673676 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.673674 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" event={"ID":"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061","Type":"ContainerDied","Data":"fd18bab158ae7210ea1d1fbdf514847d6e525375313b511b71bad776f21a801e"} Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.673784 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd18bab158ae7210ea1d1fbdf514847d6e525375313b511b71bad776f21a801e" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.716425 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw"] Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.735663 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw"] Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.090771 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547930-wk7nk" Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.191009 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fhvc\" (UniqueName: \"kubernetes.io/projected/579abd64-02ee-47c8-b1ae-a7116434d46c-kube-api-access-2fhvc\") pod \"579abd64-02ee-47c8-b1ae-a7116434d46c\" (UID: \"579abd64-02ee-47c8-b1ae-a7116434d46c\") " Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.198230 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/579abd64-02ee-47c8-b1ae-a7116434d46c-kube-api-access-2fhvc" (OuterVolumeSpecName: "kube-api-access-2fhvc") pod "579abd64-02ee-47c8-b1ae-a7116434d46c" (UID: "579abd64-02ee-47c8-b1ae-a7116434d46c"). 
InnerVolumeSpecName "kube-api-access-2fhvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.293998 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fhvc\" (UniqueName: \"kubernetes.io/projected/579abd64-02ee-47c8-b1ae-a7116434d46c-kube-api-access-2fhvc\") on node \"crc\" DevicePath \"\"" Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.690085 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547930-wk7nk" event={"ID":"579abd64-02ee-47c8-b1ae-a7116434d46c","Type":"ContainerDied","Data":"1e60b0110fa6e1e6e03b0605d11f478811fd96a0e5bbab8ef27f76365c46c98a"} Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.690348 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e60b0110fa6e1e6e03b0605d11f478811fd96a0e5bbab8ef27f76365c46c98a" Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.690410 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547930-wk7nk" Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.720989 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12916c4b-c46a-4104-8a61-c4ca5e3cfb96" path="/var/lib/kubelet/pods/12916c4b-c46a-4104-8a61-c4ca5e3cfb96/volumes" Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.722677 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547924-pr254"] Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.726067 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547924-pr254"] Mar 07 09:30:07 crc kubenswrapper[4761]: I0307 09:30:07.727519 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="402eb779-1735-4115-a306-00df8c5240aa" path="/var/lib/kubelet/pods/402eb779-1735-4115-a306-00df8c5240aa/volumes" Mar 07 09:30:13 crc kubenswrapper[4761]: I0307 09:30:13.768744 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:30:13 crc kubenswrapper[4761]: I0307 09:30:13.769436 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:30:23 crc kubenswrapper[4761]: I0307 09:30:23.464695 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-jzcxv_e53253dc-17a2-4470-a579-410f349a1759/cluster-logging-operator/0.log" Mar 07 09:30:23 crc kubenswrapper[4761]: I0307 09:30:23.499349 4761 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-ntd8l_9756514d-4338-4ae3-bf64-4498bb1b8f88/collector/0.log" Mar 07 09:30:23 crc kubenswrapper[4761]: I0307 09:30:23.639310 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_ed3dc6dd-e534-41c2-b652-4aa0714797a0/loki-compactor/0.log" Mar 07 09:30:23 crc kubenswrapper[4761]: I0307 09:30:23.680940 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-d62lh_6092a906-c0c5-4dcd-bb59-a9ea6a3f2745/loki-distributor/0.log" Mar 07 09:30:23 crc kubenswrapper[4761]: I0307 09:30:23.862228 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6549c956bc-b2qfh_b942b317-2819-4d06-9e2a-ed257dd6e63e/gateway/0.log" Mar 07 09:30:23 crc kubenswrapper[4761]: I0307 09:30:23.901147 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6549c956bc-b2qfh_b942b317-2819-4d06-9e2a-ed257dd6e63e/opa/0.log" Mar 07 09:30:24 crc kubenswrapper[4761]: I0307 09:30:24.061160 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6549c956bc-hqsjt_efc019b2-ac66-44ef-a1e7-cce4db209456/gateway/0.log" Mar 07 09:30:24 crc kubenswrapper[4761]: I0307 09:30:24.071220 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6549c956bc-hqsjt_efc019b2-ac66-44ef-a1e7-cce4db209456/opa/0.log" Mar 07 09:30:24 crc kubenswrapper[4761]: I0307 09:30:24.118163 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_2d390fba-d423-4b88-90b2-0b291fe8e35b/loki-index-gateway/0.log" Mar 07 09:30:24 crc kubenswrapper[4761]: I0307 09:30:24.362444 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-f9kfv_c0d9aa49-bf5e-4663-9523-a67b07e95721/loki-querier/0.log" Mar 07 09:30:24 crc kubenswrapper[4761]: I0307 09:30:24.396189 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_133e9b5e-adcc-4dd6-b762-fc29c779b70a/loki-ingester/0.log" Mar 07 09:30:24 crc kubenswrapper[4761]: I0307 09:30:24.602132 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-pvm88_22aee2b0-8c5f-486a-b74f-51b6452c7f8c/loki-query-frontend/0.log" Mar 07 09:30:26 crc kubenswrapper[4761]: I0307 09:30:26.465584 4761 scope.go:117] "RemoveContainer" containerID="8693c5f8a7641fc04ea6fca2b5174f2bc562b7f8b1848e27635f2da9f77fd7f4" Mar 07 09:30:26 crc kubenswrapper[4761]: I0307 09:30:26.516058 4761 scope.go:117] "RemoveContainer" containerID="61984e3e86b5cb7d7262d6662bcd7e8f45cbcda629a21aa027cdcdab8daa0178" Mar 07 09:30:40 crc kubenswrapper[4761]: I0307 09:30:40.619105 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-m2tp4_adfa916b-8977-446f-9387-932788e51e10/controller/1.log" Mar 07 09:30:40 crc kubenswrapper[4761]: I0307 09:30:40.784951 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-m2tp4_adfa916b-8977-446f-9387-932788e51e10/controller/0.log" Mar 07 09:30:40 crc kubenswrapper[4761]: I0307 09:30:40.802002 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-m2tp4_adfa916b-8977-446f-9387-932788e51e10/kube-rbac-proxy/0.log" Mar 07 09:30:40 crc kubenswrapper[4761]: I0307 09:30:40.850643 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-frr-files/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.026474 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-metrics/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.094371 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-frr-files/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.096535 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-reloader/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.113680 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-reloader/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.288066 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-metrics/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.329322 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-reloader/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.345190 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-frr-files/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.348249 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-metrics/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.502907 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-frr-files/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.532125 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-reloader/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.532176 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-metrics/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.565486 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/controller/1.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.696035 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/controller/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.776456 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/frr-metrics/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.958745 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/kube-rbac-proxy/0.log" Mar 07 09:30:42 crc kubenswrapper[4761]: I0307 09:30:42.002360 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/kube-rbac-proxy-frr/0.log" Mar 07 09:30:42 crc kubenswrapper[4761]: I0307 09:30:42.027542 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/frr/1.log" Mar 07 09:30:42 crc kubenswrapper[4761]: I0307 09:30:42.228655 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/reloader/0.log" Mar 07 09:30:42 crc kubenswrapper[4761]: I0307 09:30:42.245512 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-4sfgk_ffb7fdc9-854e-4990-81e1-b14fb9966476/frr-k8s-webhook-server/0.log" Mar 07 09:30:42 crc kubenswrapper[4761]: I0307 09:30:42.495342 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b98ff9599-kldnc_4c23f924-b431-4a3e-819b-713e132885f4/manager/1.log" Mar 07 09:30:42 crc kubenswrapper[4761]: I0307 09:30:42.529528 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b98ff9599-kldnc_4c23f924-b431-4a3e-819b-713e132885f4/manager/0.log" Mar 07 09:30:42 crc kubenswrapper[4761]: I0307 09:30:42.716666 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6899cc684-8cx59_3dc06a77-85c3-42a9-a972-c3f33e46df4b/webhook-server/1.log" Mar 07 09:30:42 crc kubenswrapper[4761]: I0307 09:30:42.746963 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6899cc684-8cx59_3dc06a77-85c3-42a9-a972-c3f33e46df4b/webhook-server/0.log" Mar 07 09:30:42 crc kubenswrapper[4761]: I0307 09:30:42.945278 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-75b4z_193543ae-839d-485e-a238-ae40e69f7b24/kube-rbac-proxy/0.log" Mar 07 09:30:43 crc kubenswrapper[4761]: I0307 09:30:43.155700 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-75b4z_193543ae-839d-485e-a238-ae40e69f7b24/speaker/1.log" Mar 07 09:30:43 crc kubenswrapper[4761]: I0307 09:30:43.768079 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:30:43 crc kubenswrapper[4761]: I0307 09:30:43.768163 4761 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:30:43 crc kubenswrapper[4761]: I0307 09:30:43.768239 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 09:30:43 crc kubenswrapper[4761]: I0307 09:30:43.769333 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 09:30:43 crc kubenswrapper[4761]: I0307 09:30:43.769391 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" gracePeriod=600 Mar 07 09:30:43 crc kubenswrapper[4761]: I0307 09:30:43.779843 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-75b4z_193543ae-839d-485e-a238-ae40e69f7b24/speaker/0.log" Mar 07 09:30:43 crc kubenswrapper[4761]: E0307 09:30:43.902471 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" 
podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:30:44 crc kubenswrapper[4761]: I0307 09:30:44.047113 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/frr/0.log" Mar 07 09:30:44 crc kubenswrapper[4761]: I0307 09:30:44.177603 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" exitCode=0 Mar 07 09:30:44 crc kubenswrapper[4761]: I0307 09:30:44.177653 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"} Mar 07 09:30:44 crc kubenswrapper[4761]: I0307 09:30:44.177688 4761 scope.go:117] "RemoveContainer" containerID="a43ace93383b743eb2d6cd7f20bb40b06f6d768f904a91bafc3da780f93481ce" Mar 07 09:30:44 crc kubenswrapper[4761]: I0307 09:30:44.178530 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:30:44 crc kubenswrapper[4761]: E0307 09:30:44.178905 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:30:57 crc kubenswrapper[4761]: I0307 09:30:57.707846 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:30:57 crc kubenswrapper[4761]: E0307 09:30:57.708751 4761 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:30:59 crc kubenswrapper[4761]: I0307 09:30:59.692651 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76_1d21bb59-ff27-4146-a566-a48cad049a17/util/0.log" Mar 07 09:30:59 crc kubenswrapper[4761]: I0307 09:30:59.907618 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76_1d21bb59-ff27-4146-a566-a48cad049a17/util/0.log" Mar 07 09:30:59 crc kubenswrapper[4761]: I0307 09:30:59.939030 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76_1d21bb59-ff27-4146-a566-a48cad049a17/pull/0.log" Mar 07 09:30:59 crc kubenswrapper[4761]: I0307 09:30:59.945588 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76_1d21bb59-ff27-4146-a566-a48cad049a17/pull/0.log" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.067021 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5n8vg"] Mar 07 09:31:00 crc kubenswrapper[4761]: E0307 09:31:00.067534 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="579abd64-02ee-47c8-b1ae-a7116434d46c" containerName="oc" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.067554 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="579abd64-02ee-47c8-b1ae-a7116434d46c" containerName="oc" Mar 07 09:31:00 crc 
kubenswrapper[4761]: E0307 09:31:00.067607 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ccf09d4-e0bb-46da-a49a-7e0d1ee72061" containerName="collect-profiles" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.067619 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ccf09d4-e0bb-46da-a49a-7e0d1ee72061" containerName="collect-profiles" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.067917 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="579abd64-02ee-47c8-b1ae-a7116434d46c" containerName="oc" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.067948 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ccf09d4-e0bb-46da-a49a-7e0d1ee72061" containerName="collect-profiles" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.071410 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.080414 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5n8vg"] Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.132096 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76_1d21bb59-ff27-4146-a566-a48cad049a17/util/0.log" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.186993 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76_1d21bb59-ff27-4146-a566-a48cad049a17/extract/0.log" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.187404 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76_1d21bb59-ff27-4146-a566-a48cad049a17/pull/0.log" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.190265 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-catalog-content\") pod \"redhat-operators-5n8vg\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.190323 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6gdg\" (UniqueName: \"kubernetes.io/projected/558c3631-706b-4682-b4e8-ea50bb28b848-kube-api-access-v6gdg\") pod \"redhat-operators-5n8vg\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.190352 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-utilities\") pod \"redhat-operators-5n8vg\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.292377 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6gdg\" (UniqueName: \"kubernetes.io/projected/558c3631-706b-4682-b4e8-ea50bb28b848-kube-api-access-v6gdg\") pod \"redhat-operators-5n8vg\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.292442 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-utilities\") pod \"redhat-operators-5n8vg\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 
09:31:00.292643 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-catalog-content\") pod \"redhat-operators-5n8vg\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.292997 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-utilities\") pod \"redhat-operators-5n8vg\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.293009 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-catalog-content\") pod \"redhat-operators-5n8vg\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.332149 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m_279f54bc-0f03-43b4-9b53-1952777e9b85/util/0.log" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.808552 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6gdg\" (UniqueName: \"kubernetes.io/projected/558c3631-706b-4682-b4e8-ea50bb28b848-kube-api-access-v6gdg\") pod \"redhat-operators-5n8vg\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.981252 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m_279f54bc-0f03-43b4-9b53-1952777e9b85/pull/0.log" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.981599 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m_279f54bc-0f03-43b4-9b53-1952777e9b85/util/0.log" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.999056 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:01 crc kubenswrapper[4761]: I0307 09:31:01.082138 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m_279f54bc-0f03-43b4-9b53-1952777e9b85/pull/0.log" Mar 07 09:31:01 crc kubenswrapper[4761]: I0307 09:31:01.303637 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m_279f54bc-0f03-43b4-9b53-1952777e9b85/pull/0.log" Mar 07 09:31:01 crc kubenswrapper[4761]: I0307 09:31:01.359089 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m_279f54bc-0f03-43b4-9b53-1952777e9b85/extract/0.log" Mar 07 09:31:01 crc kubenswrapper[4761]: I0307 09:31:01.372895 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m_279f54bc-0f03-43b4-9b53-1952777e9b85/util/0.log" Mar 07 09:31:01 crc kubenswrapper[4761]: I0307 09:31:01.496523 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w_9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4/util/0.log" Mar 07 09:31:01 crc kubenswrapper[4761]: I0307 09:31:01.535905 4761 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5n8vg"] Mar 07 09:31:01 crc kubenswrapper[4761]: W0307 09:31:01.545060 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod558c3631_706b_4682_b4e8_ea50bb28b848.slice/crio-f0d1206d5d73a6dce9d7f262aa940890028bd238605ff2620440c20fd72828b7 WatchSource:0}: Error finding container f0d1206d5d73a6dce9d7f262aa940890028bd238605ff2620440c20fd72828b7: Status 404 returned error can't find the container with id f0d1206d5d73a6dce9d7f262aa940890028bd238605ff2620440c20fd72828b7 Mar 07 09:31:01 crc kubenswrapper[4761]: I0307 09:31:01.767857 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w_9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4/util/0.log" Mar 07 09:31:01 crc kubenswrapper[4761]: I0307 09:31:01.819694 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w_9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4/pull/0.log" Mar 07 09:31:01 crc kubenswrapper[4761]: I0307 09:31:01.843877 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w_9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4/pull/0.log" Mar 07 09:31:02 crc kubenswrapper[4761]: I0307 09:31:02.048979 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w_9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4/util/0.log" Mar 07 09:31:02 crc kubenswrapper[4761]: I0307 09:31:02.097388 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w_9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4/extract/0.log" Mar 07 09:31:02 crc kubenswrapper[4761]: 
I0307 09:31:02.108085 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w_9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4/pull/0.log" Mar 07 09:31:02 crc kubenswrapper[4761]: I0307 09:31:02.282254 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dbw8z_de1f85b3-124d-434b-b053-4a24859497f1/extract-utilities/0.log" Mar 07 09:31:02 crc kubenswrapper[4761]: I0307 09:31:02.405747 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n8vg" event={"ID":"558c3631-706b-4682-b4e8-ea50bb28b848","Type":"ContainerDied","Data":"848c3d6c5d66222c2ba9469f59bf86a26a2f747e2d33afcefc8ea92a9468c6ec"} Mar 07 09:31:02 crc kubenswrapper[4761]: I0307 09:31:02.405799 4761 generic.go:334] "Generic (PLEG): container finished" podID="558c3631-706b-4682-b4e8-ea50bb28b848" containerID="848c3d6c5d66222c2ba9469f59bf86a26a2f747e2d33afcefc8ea92a9468c6ec" exitCode=0 Mar 07 09:31:02 crc kubenswrapper[4761]: I0307 09:31:02.406281 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n8vg" event={"ID":"558c3631-706b-4682-b4e8-ea50bb28b848","Type":"ContainerStarted","Data":"f0d1206d5d73a6dce9d7f262aa940890028bd238605ff2620440c20fd72828b7"} Mar 07 09:31:03 crc kubenswrapper[4761]: I0307 09:31:03.185963 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dbw8z_de1f85b3-124d-434b-b053-4a24859497f1/extract-utilities/0.log" Mar 07 09:31:03 crc kubenswrapper[4761]: I0307 09:31:03.186757 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dbw8z_de1f85b3-124d-434b-b053-4a24859497f1/extract-content/0.log" Mar 07 09:31:03 crc kubenswrapper[4761]: I0307 09:31:03.187956 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-dbw8z_de1f85b3-124d-434b-b053-4a24859497f1/extract-content/0.log" Mar 07 09:31:03 crc kubenswrapper[4761]: I0307 09:31:03.516166 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dbw8z_de1f85b3-124d-434b-b053-4a24859497f1/extract-content/0.log" Mar 07 09:31:03 crc kubenswrapper[4761]: I0307 09:31:03.780765 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dbw8z_de1f85b3-124d-434b-b053-4a24859497f1/extract-utilities/0.log" Mar 07 09:31:03 crc kubenswrapper[4761]: I0307 09:31:03.966760 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqkkk_b9d0650f-8057-46e1-a006-f240615ce96f/extract-utilities/0.log" Mar 07 09:31:04 crc kubenswrapper[4761]: I0307 09:31:04.238522 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqkkk_b9d0650f-8057-46e1-a006-f240615ce96f/extract-content/0.log" Mar 07 09:31:04 crc kubenswrapper[4761]: I0307 09:31:04.326931 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqkkk_b9d0650f-8057-46e1-a006-f240615ce96f/extract-content/0.log" Mar 07 09:31:04 crc kubenswrapper[4761]: I0307 09:31:04.327452 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqkkk_b9d0650f-8057-46e1-a006-f240615ce96f/extract-utilities/0.log" Mar 07 09:31:04 crc kubenswrapper[4761]: I0307 09:31:04.494889 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n8vg" event={"ID":"558c3631-706b-4682-b4e8-ea50bb28b848","Type":"ContainerStarted","Data":"a9c4e77bd0a0ea72089d245d3c810439e7325cfefe95cfebd9a46ac28d5b2db3"} Mar 07 09:31:04 crc kubenswrapper[4761]: I0307 09:31:04.552383 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-hqkkk_b9d0650f-8057-46e1-a006-f240615ce96f/extract-utilities/0.log" Mar 07 09:31:04 crc kubenswrapper[4761]: I0307 09:31:04.616205 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqkkk_b9d0650f-8057-46e1-a006-f240615ce96f/extract-content/0.log" Mar 07 09:31:04 crc kubenswrapper[4761]: I0307 09:31:04.802273 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dbw8z_de1f85b3-124d-434b-b053-4a24859497f1/registry-server/0.log" Mar 07 09:31:04 crc kubenswrapper[4761]: I0307 09:31:04.831827 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq_e79675f7-d335-4f19-b872-22f70dccc150/util/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.111946 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq_e79675f7-d335-4f19-b872-22f70dccc150/util/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.178271 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq_e79675f7-d335-4f19-b872-22f70dccc150/pull/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.183749 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq_e79675f7-d335-4f19-b872-22f70dccc150/pull/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.421066 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq_e79675f7-d335-4f19-b872-22f70dccc150/util/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.479858 4761 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq_e79675f7-d335-4f19-b872-22f70dccc150/pull/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.568808 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq_e79675f7-d335-4f19-b872-22f70dccc150/extract/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.645848 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr_eb6c0fb0-7486-43c4-8f84-e495d653d6fe/util/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.683908 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqkkk_b9d0650f-8057-46e1-a006-f240615ce96f/registry-server/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.860995 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr_eb6c0fb0-7486-43c4-8f84-e495d653d6fe/util/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.876127 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr_eb6c0fb0-7486-43c4-8f84-e495d653d6fe/pull/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.883896 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr_eb6c0fb0-7486-43c4-8f84-e495d653d6fe/pull/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.092882 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr_eb6c0fb0-7486-43c4-8f84-e495d653d6fe/util/0.log" Mar 07 09:31:06 
crc kubenswrapper[4761]: I0307 09:31:06.093264 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr_eb6c0fb0-7486-43c4-8f84-e495d653d6fe/pull/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.108607 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr_eb6c0fb0-7486-43c4-8f84-e495d653d6fe/extract/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.186044 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zgvpf_2b3bce52-2720-4999-bf2f-f6808cd3a5fe/marketplace-operator/1.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.367532 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b5t8f_26b26086-7428-4218-a5c0-64eb4a9d581f/extract-utilities/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.369418 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zgvpf_2b3bce52-2720-4999-bf2f-f6808cd3a5fe/marketplace-operator/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.595509 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b5t8f_26b26086-7428-4218-a5c0-64eb4a9d581f/extract-content/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.611967 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b5t8f_26b26086-7428-4218-a5c0-64eb4a9d581f/extract-content/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.621810 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b5t8f_26b26086-7428-4218-a5c0-64eb4a9d581f/extract-utilities/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 
09:31:06.828350 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b5t8f_26b26086-7428-4218-a5c0-64eb4a9d581f/extract-utilities/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.883342 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b5t8f_26b26086-7428-4218-a5c0-64eb4a9d581f/extract-content/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.893795 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5p7lw_dc70d269-9a38-4cf3-a494-956420600965/extract-utilities/0.log" Mar 07 09:31:07 crc kubenswrapper[4761]: I0307 09:31:07.083323 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b5t8f_26b26086-7428-4218-a5c0-64eb4a9d581f/registry-server/0.log" Mar 07 09:31:07 crc kubenswrapper[4761]: I0307 09:31:07.132293 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5p7lw_dc70d269-9a38-4cf3-a494-956420600965/extract-content/0.log" Mar 07 09:31:07 crc kubenswrapper[4761]: I0307 09:31:07.136094 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5p7lw_dc70d269-9a38-4cf3-a494-956420600965/extract-utilities/0.log" Mar 07 09:31:07 crc kubenswrapper[4761]: I0307 09:31:07.171003 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5p7lw_dc70d269-9a38-4cf3-a494-956420600965/extract-content/0.log" Mar 07 09:31:07 crc kubenswrapper[4761]: I0307 09:31:07.352964 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5p7lw_dc70d269-9a38-4cf3-a494-956420600965/extract-utilities/0.log" Mar 07 09:31:07 crc kubenswrapper[4761]: I0307 09:31:07.381233 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-5p7lw_dc70d269-9a38-4cf3-a494-956420600965/extract-content/0.log" Mar 07 09:31:08 crc kubenswrapper[4761]: I0307 09:31:08.285972 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5p7lw_dc70d269-9a38-4cf3-a494-956420600965/registry-server/0.log" Mar 07 09:31:09 crc kubenswrapper[4761]: I0307 09:31:09.554879 4761 generic.go:334] "Generic (PLEG): container finished" podID="558c3631-706b-4682-b4e8-ea50bb28b848" containerID="a9c4e77bd0a0ea72089d245d3c810439e7325cfefe95cfebd9a46ac28d5b2db3" exitCode=0 Mar 07 09:31:09 crc kubenswrapper[4761]: I0307 09:31:09.555227 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n8vg" event={"ID":"558c3631-706b-4682-b4e8-ea50bb28b848","Type":"ContainerDied","Data":"a9c4e77bd0a0ea72089d245d3c810439e7325cfefe95cfebd9a46ac28d5b2db3"} Mar 07 09:31:10 crc kubenswrapper[4761]: I0307 09:31:10.572360 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n8vg" event={"ID":"558c3631-706b-4682-b4e8-ea50bb28b848","Type":"ContainerStarted","Data":"5f2caf001957e096349d2ebe16a9d81707793beb51122ac26d5a4b99f1f0ae8e"} Mar 07 09:31:10 crc kubenswrapper[4761]: I0307 09:31:10.614046 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5n8vg" podStartSLOduration=3.052331552 podStartE2EDuration="10.614023294s" podCreationTimestamp="2026-03-07 09:31:00 +0000 UTC" firstStartedPulling="2026-03-07 09:31:02.408456981 +0000 UTC m=+6119.317623456" lastFinishedPulling="2026-03-07 09:31:09.970148703 +0000 UTC m=+6126.879315198" observedRunningTime="2026-03-07 09:31:10.593499516 +0000 UTC m=+6127.502666001" watchObservedRunningTime="2026-03-07 09:31:10.614023294 +0000 UTC m=+6127.523189769" Mar 07 09:31:10 crc kubenswrapper[4761]: I0307 09:31:10.999158 4761 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:10 crc kubenswrapper[4761]: I0307 09:31:10.999221 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:12 crc kubenswrapper[4761]: I0307 09:31:12.461982 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5n8vg" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="registry-server" probeResult="failure" output=< Mar 07 09:31:12 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:31:12 crc kubenswrapper[4761]: > Mar 07 09:31:12 crc kubenswrapper[4761]: I0307 09:31:12.707159 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:31:12 crc kubenswrapper[4761]: E0307 09:31:12.707882 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:31:22 crc kubenswrapper[4761]: I0307 09:31:22.068919 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5n8vg" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="registry-server" probeResult="failure" output=< Mar 07 09:31:22 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:31:22 crc kubenswrapper[4761]: > Mar 07 09:31:23 crc kubenswrapper[4761]: I0307 09:31:23.460579 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-hftl9_40c12f82-6c14-4659-80c5-ab38e649706a/prometheus-operator/0.log" Mar 07 09:31:23 crc kubenswrapper[4761]: I0307 09:31:23.462260 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_60fad35f-402e-4c65-a097-a836c5692479/prometheus-operator-admission-webhook/0.log" Mar 07 09:31:23 crc kubenswrapper[4761]: I0307 09:31:23.530107 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6/prometheus-operator-admission-webhook/0.log" Mar 07 09:31:23 crc kubenswrapper[4761]: I0307 09:31:23.667001 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-bs4zz_6a8f8341-0209-4fdd-8fdd-4373ec14e18c/observability-ui-dashboards/0.log" Mar 07 09:31:23 crc kubenswrapper[4761]: I0307 09:31:23.687986 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-kfph9_b17d76c5-b5d9-4f79-841e-287d05540b40/operator/0.log" Mar 07 09:31:23 crc kubenswrapper[4761]: I0307 09:31:23.725396 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:31:23 crc kubenswrapper[4761]: E0307 09:31:23.725826 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:31:23 crc kubenswrapper[4761]: I0307 09:31:23.733834 4761 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-4l52t_0c90c3e5-de84-4cb1-ac22-fe02ca708196/perses-operator/0.log" Mar 07 09:31:32 crc kubenswrapper[4761]: I0307 09:31:32.048281 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5n8vg" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="registry-server" probeResult="failure" output=< Mar 07 09:31:32 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:31:32 crc kubenswrapper[4761]: > Mar 07 09:31:38 crc kubenswrapper[4761]: I0307 09:31:38.175650 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4c45cc-fmrsq_8a7603da-0d59-431b-82c9-59c887e9f8d6/kube-rbac-proxy/0.log" Mar 07 09:31:38 crc kubenswrapper[4761]: I0307 09:31:38.240136 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4c45cc-fmrsq_8a7603da-0d59-431b-82c9-59c887e9f8d6/manager/1.log" Mar 07 09:31:38 crc kubenswrapper[4761]: I0307 09:31:38.338298 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4c45cc-fmrsq_8a7603da-0d59-431b-82c9-59c887e9f8d6/manager/0.log" Mar 07 09:31:38 crc kubenswrapper[4761]: I0307 09:31:38.706273 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:31:38 crc kubenswrapper[4761]: E0307 09:31:38.706689 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" 
Mar 07 09:31:42 crc kubenswrapper[4761]: I0307 09:31:42.058022 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5n8vg" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="registry-server" probeResult="failure" output=< Mar 07 09:31:42 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:31:42 crc kubenswrapper[4761]: > Mar 07 09:31:51 crc kubenswrapper[4761]: I0307 09:31:51.064324 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:51 crc kubenswrapper[4761]: I0307 09:31:51.130442 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:51 crc kubenswrapper[4761]: I0307 09:31:51.316482 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5n8vg"] Mar 07 09:31:52 crc kubenswrapper[4761]: I0307 09:31:52.707167 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:31:52 crc kubenswrapper[4761]: E0307 09:31:52.707752 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:31:53 crc kubenswrapper[4761]: I0307 09:31:53.079552 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5n8vg" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="registry-server" containerID="cri-o://5f2caf001957e096349d2ebe16a9d81707793beb51122ac26d5a4b99f1f0ae8e" 
gracePeriod=2 Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.100111 4761 generic.go:334] "Generic (PLEG): container finished" podID="558c3631-706b-4682-b4e8-ea50bb28b848" containerID="5f2caf001957e096349d2ebe16a9d81707793beb51122ac26d5a4b99f1f0ae8e" exitCode=0 Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.101061 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n8vg" event={"ID":"558c3631-706b-4682-b4e8-ea50bb28b848","Type":"ContainerDied","Data":"5f2caf001957e096349d2ebe16a9d81707793beb51122ac26d5a4b99f1f0ae8e"} Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.270781 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.381208 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6gdg\" (UniqueName: \"kubernetes.io/projected/558c3631-706b-4682-b4e8-ea50bb28b848-kube-api-access-v6gdg\") pod \"558c3631-706b-4682-b4e8-ea50bb28b848\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.382675 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-utilities\") pod \"558c3631-706b-4682-b4e8-ea50bb28b848\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.382787 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-catalog-content\") pod \"558c3631-706b-4682-b4e8-ea50bb28b848\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.385727 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-utilities" (OuterVolumeSpecName: "utilities") pod "558c3631-706b-4682-b4e8-ea50bb28b848" (UID: "558c3631-706b-4682-b4e8-ea50bb28b848"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.414218 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/558c3631-706b-4682-b4e8-ea50bb28b848-kube-api-access-v6gdg" (OuterVolumeSpecName: "kube-api-access-v6gdg") pod "558c3631-706b-4682-b4e8-ea50bb28b848" (UID: "558c3631-706b-4682-b4e8-ea50bb28b848"). InnerVolumeSpecName "kube-api-access-v6gdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.485371 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6gdg\" (UniqueName: \"kubernetes.io/projected/558c3631-706b-4682-b4e8-ea50bb28b848-kube-api-access-v6gdg\") on node \"crc\" DevicePath \"\"" Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.485400 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.574643 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "558c3631-706b-4682-b4e8-ea50bb28b848" (UID: "558c3631-706b-4682-b4e8-ea50bb28b848"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.587990 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:31:55 crc kubenswrapper[4761]: I0307 09:31:55.115302 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n8vg" event={"ID":"558c3631-706b-4682-b4e8-ea50bb28b848","Type":"ContainerDied","Data":"f0d1206d5d73a6dce9d7f262aa940890028bd238605ff2620440c20fd72828b7"} Mar 07 09:31:55 crc kubenswrapper[4761]: I0307 09:31:55.115547 4761 scope.go:117] "RemoveContainer" containerID="5f2caf001957e096349d2ebe16a9d81707793beb51122ac26d5a4b99f1f0ae8e" Mar 07 09:31:55 crc kubenswrapper[4761]: I0307 09:31:55.115375 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:55 crc kubenswrapper[4761]: I0307 09:31:55.161387 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5n8vg"] Mar 07 09:31:55 crc kubenswrapper[4761]: I0307 09:31:55.168686 4761 scope.go:117] "RemoveContainer" containerID="a9c4e77bd0a0ea72089d245d3c810439e7325cfefe95cfebd9a46ac28d5b2db3" Mar 07 09:31:55 crc kubenswrapper[4761]: I0307 09:31:55.173577 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5n8vg"] Mar 07 09:31:55 crc kubenswrapper[4761]: I0307 09:31:55.214852 4761 scope.go:117] "RemoveContainer" containerID="848c3d6c5d66222c2ba9469f59bf86a26a2f747e2d33afcefc8ea92a9468c6ec" Mar 07 09:31:55 crc kubenswrapper[4761]: I0307 09:31:55.717642 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" path="/var/lib/kubelet/pods/558c3631-706b-4682-b4e8-ea50bb28b848/volumes" Mar 07 09:32:00 crc 
kubenswrapper[4761]: I0307 09:32:00.221789 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547932-8wtdb"] Mar 07 09:32:00 crc kubenswrapper[4761]: E0307 09:32:00.222984 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="extract-content" Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.223001 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="extract-content" Mar 07 09:32:00 crc kubenswrapper[4761]: E0307 09:32:00.223032 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="registry-server" Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.223040 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="registry-server" Mar 07 09:32:00 crc kubenswrapper[4761]: E0307 09:32:00.223069 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="extract-utilities" Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.223076 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="extract-utilities" Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.223386 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="registry-server" Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.224776 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547932-8wtdb"
Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.234409 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.234879 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547932-8wtdb"]
Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.236494 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv"
Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.236684 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.349843 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx4zz\" (UniqueName: \"kubernetes.io/projected/67c5d0cf-e07f-44ac-ae34-c0a8d42881b4-kube-api-access-nx4zz\") pod \"auto-csr-approver-29547932-8wtdb\" (UID: \"67c5d0cf-e07f-44ac-ae34-c0a8d42881b4\") " pod="openshift-infra/auto-csr-approver-29547932-8wtdb"
Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.452125 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx4zz\" (UniqueName: \"kubernetes.io/projected/67c5d0cf-e07f-44ac-ae34-c0a8d42881b4-kube-api-access-nx4zz\") pod \"auto-csr-approver-29547932-8wtdb\" (UID: \"67c5d0cf-e07f-44ac-ae34-c0a8d42881b4\") " pod="openshift-infra/auto-csr-approver-29547932-8wtdb"
Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.473818 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx4zz\" (UniqueName: \"kubernetes.io/projected/67c5d0cf-e07f-44ac-ae34-c0a8d42881b4-kube-api-access-nx4zz\") pod \"auto-csr-approver-29547932-8wtdb\" (UID: \"67c5d0cf-e07f-44ac-ae34-c0a8d42881b4\") " pod="openshift-infra/auto-csr-approver-29547932-8wtdb"
Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.545204 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547932-8wtdb"
Mar 07 09:32:01 crc kubenswrapper[4761]: I0307 09:32:01.080974 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547932-8wtdb"]
Mar 07 09:32:01 crc kubenswrapper[4761]: I0307 09:32:01.187056 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547932-8wtdb" event={"ID":"67c5d0cf-e07f-44ac-ae34-c0a8d42881b4","Type":"ContainerStarted","Data":"b3783d62f879a42b33dba0a81e1578e79d28ab175c67c3f4849b2de936c24e0a"}
Mar 07 09:32:03 crc kubenswrapper[4761]: I0307 09:32:03.269905 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547932-8wtdb" event={"ID":"67c5d0cf-e07f-44ac-ae34-c0a8d42881b4","Type":"ContainerStarted","Data":"019c837209f1119e836361ac64776514973bf8c3367ba3225bcf758b1ce4d9d5"}
Mar 07 09:32:03 crc kubenswrapper[4761]: I0307 09:32:03.345516 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547932-8wtdb" podStartSLOduration=2.460141239 podStartE2EDuration="3.345496533s" podCreationTimestamp="2026-03-07 09:32:00 +0000 UTC" firstStartedPulling="2026-03-07 09:32:01.088841482 +0000 UTC m=+6177.998007957" lastFinishedPulling="2026-03-07 09:32:01.974196776 +0000 UTC m=+6178.883363251" observedRunningTime="2026-03-07 09:32:03.337820311 +0000 UTC m=+6180.246986786" watchObservedRunningTime="2026-03-07 09:32:03.345496533 +0000 UTC m=+6180.254663008"
Mar 07 09:32:04 crc kubenswrapper[4761]: I0307 09:32:04.705294 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"
Mar 07 09:32:04 crc kubenswrapper[4761]: E0307 09:32:04.705916 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:32:06 crc kubenswrapper[4761]: I0307 09:32:06.316539 4761 generic.go:334] "Generic (PLEG): container finished" podID="67c5d0cf-e07f-44ac-ae34-c0a8d42881b4" containerID="019c837209f1119e836361ac64776514973bf8c3367ba3225bcf758b1ce4d9d5" exitCode=0
Mar 07 09:32:06 crc kubenswrapper[4761]: I0307 09:32:06.316842 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547932-8wtdb" event={"ID":"67c5d0cf-e07f-44ac-ae34-c0a8d42881b4","Type":"ContainerDied","Data":"019c837209f1119e836361ac64776514973bf8c3367ba3225bcf758b1ce4d9d5"}
Mar 07 09:32:07 crc kubenswrapper[4761]: I0307 09:32:07.876326 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547932-8wtdb"
Mar 07 09:32:07 crc kubenswrapper[4761]: I0307 09:32:07.974399 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx4zz\" (UniqueName: \"kubernetes.io/projected/67c5d0cf-e07f-44ac-ae34-c0a8d42881b4-kube-api-access-nx4zz\") pod \"67c5d0cf-e07f-44ac-ae34-c0a8d42881b4\" (UID: \"67c5d0cf-e07f-44ac-ae34-c0a8d42881b4\") "
Mar 07 09:32:07 crc kubenswrapper[4761]: I0307 09:32:07.980115 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c5d0cf-e07f-44ac-ae34-c0a8d42881b4-kube-api-access-nx4zz" (OuterVolumeSpecName: "kube-api-access-nx4zz") pod "67c5d0cf-e07f-44ac-ae34-c0a8d42881b4" (UID: "67c5d0cf-e07f-44ac-ae34-c0a8d42881b4"). InnerVolumeSpecName "kube-api-access-nx4zz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 09:32:08 crc kubenswrapper[4761]: I0307 09:32:08.079221 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx4zz\" (UniqueName: \"kubernetes.io/projected/67c5d0cf-e07f-44ac-ae34-c0a8d42881b4-kube-api-access-nx4zz\") on node \"crc\" DevicePath \"\""
Mar 07 09:32:08 crc kubenswrapper[4761]: I0307 09:32:08.365109 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547932-8wtdb" event={"ID":"67c5d0cf-e07f-44ac-ae34-c0a8d42881b4","Type":"ContainerDied","Data":"b3783d62f879a42b33dba0a81e1578e79d28ab175c67c3f4849b2de936c24e0a"}
Mar 07 09:32:08 crc kubenswrapper[4761]: I0307 09:32:08.366777 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3783d62f879a42b33dba0a81e1578e79d28ab175c67c3f4849b2de936c24e0a"
Mar 07 09:32:08 crc kubenswrapper[4761]: I0307 09:32:08.365269 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547932-8wtdb"
Mar 07 09:32:08 crc kubenswrapper[4761]: I0307 09:32:08.416139 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547926-mpcnk"]
Mar 07 09:32:08 crc kubenswrapper[4761]: I0307 09:32:08.429045 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547926-mpcnk"]
Mar 07 09:32:09 crc kubenswrapper[4761]: I0307 09:32:09.729447 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f1ce531-a112-4c72-8d81-051bccb5e911" path="/var/lib/kubelet/pods/1f1ce531-a112-4c72-8d81-051bccb5e911/volumes"
Mar 07 09:32:19 crc kubenswrapper[4761]: I0307 09:32:19.705781 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"
Mar 07 09:32:19 crc kubenswrapper[4761]: E0307 09:32:19.706624 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:32:26 crc kubenswrapper[4761]: I0307 09:32:26.693497 4761 scope.go:117] "RemoveContainer" containerID="068ebde8a61c2f74f529aeef190e5f95bd0742a64866071d76a4d30cec4aa5c1"
Mar 07 09:32:26 crc kubenswrapper[4761]: I0307 09:32:26.758538 4761 scope.go:117] "RemoveContainer" containerID="c3859f1ed361967d75da9f67dc1dc6e93509a205363c3c250c7054b10952f11a"
Mar 07 09:32:31 crc kubenswrapper[4761]: I0307 09:32:31.705766 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"
Mar 07 09:32:31 crc kubenswrapper[4761]: E0307 09:32:31.707435 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:32:44 crc kubenswrapper[4761]: I0307 09:32:44.706490 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"
Mar 07 09:32:44 crc kubenswrapper[4761]: E0307 09:32:44.707836 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:32:57 crc kubenswrapper[4761]: I0307 09:32:57.708573 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"
Mar 07 09:32:57 crc kubenswrapper[4761]: E0307 09:32:57.709654 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:33:09 crc kubenswrapper[4761]: I0307 09:33:09.707548 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"
Mar 07 09:33:09 crc kubenswrapper[4761]: E0307 09:33:09.709304 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:33:24 crc kubenswrapper[4761]: I0307 09:33:24.709158 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"
Mar 07 09:33:24 crc kubenswrapper[4761]: E0307 09:33:24.710303 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:33:27 crc kubenswrapper[4761]: I0307 09:33:27.000279 4761 scope.go:117] "RemoveContainer" containerID="2df4b03d893e63c7b20aa7be594280b87a5b14b26b7ff213014869b7fe4ee9d7"
Mar 07 09:33:35 crc kubenswrapper[4761]: I0307 09:33:35.706967 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"
Mar 07 09:33:35 crc kubenswrapper[4761]: E0307 09:33:35.707815 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:33:39 crc kubenswrapper[4761]: I0307 09:33:39.906577 4761 generic.go:334] "Generic (PLEG): container finished" podID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" containerID="b4656b8ea524827ca8cf95b0f649a630118cb3e3a497912fed259248ebe052d6" exitCode=0
Mar 07 09:33:39 crc kubenswrapper[4761]: I0307 09:33:39.906708 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/must-gather-7h8wv" event={"ID":"6e76b73c-a01e-4d4a-9574-8db8b23c3adb","Type":"ContainerDied","Data":"b4656b8ea524827ca8cf95b0f649a630118cb3e3a497912fed259248ebe052d6"}
Mar 07 09:33:39 crc kubenswrapper[4761]: I0307 09:33:39.909701 4761 scope.go:117] "RemoveContainer" containerID="b4656b8ea524827ca8cf95b0f649a630118cb3e3a497912fed259248ebe052d6"
Mar 07 09:33:40 crc kubenswrapper[4761]: I0307 09:33:40.859412 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ns4hc_must-gather-7h8wv_6e76b73c-a01e-4d4a-9574-8db8b23c3adb/gather/0.log"
Mar 07 09:33:47 crc kubenswrapper[4761]: I0307 09:33:47.706479 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"
Mar 07 09:33:47 crc kubenswrapper[4761]: E0307 09:33:47.707800 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:33:49 crc kubenswrapper[4761]: I0307 09:33:49.897836 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ns4hc/must-gather-7h8wv"]
Mar 07 09:33:49 crc kubenswrapper[4761]: I0307 09:33:49.899278 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ns4hc/must-gather-7h8wv" podUID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" containerName="copy" containerID="cri-o://a0bee0769ff56fd8f09ce4d6d57f3b219d70c968a5870f25aa904b98bfb31fb0" gracePeriod=2
Mar 07 09:33:49 crc kubenswrapper[4761]: I0307 09:33:49.921936 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ns4hc/must-gather-7h8wv"]
Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.039985 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ns4hc_must-gather-7h8wv_6e76b73c-a01e-4d4a-9574-8db8b23c3adb/copy/0.log"
Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.041123 4761 generic.go:334] "Generic (PLEG): container finished" podID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" containerID="a0bee0769ff56fd8f09ce4d6d57f3b219d70c968a5870f25aa904b98bfb31fb0" exitCode=143
Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.377028 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ns4hc_must-gather-7h8wv_6e76b73c-a01e-4d4a-9574-8db8b23c3adb/copy/0.log"
Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.377610 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/must-gather-7h8wv"
Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.530898 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mr44\" (UniqueName: \"kubernetes.io/projected/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-kube-api-access-6mr44\") pod \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\" (UID: \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\") "
Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.531279 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-must-gather-output\") pod \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\" (UID: \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\") "
Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.538913 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-kube-api-access-6mr44" (OuterVolumeSpecName: "kube-api-access-6mr44") pod "6e76b73c-a01e-4d4a-9574-8db8b23c3adb" (UID: "6e76b73c-a01e-4d4a-9574-8db8b23c3adb"). InnerVolumeSpecName "kube-api-access-6mr44". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.634258 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mr44\" (UniqueName: \"kubernetes.io/projected/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-kube-api-access-6mr44\") on node \"crc\" DevicePath \"\""
Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.718163 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6e76b73c-a01e-4d4a-9574-8db8b23c3adb" (UID: "6e76b73c-a01e-4d4a-9574-8db8b23c3adb"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.736861 4761 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-must-gather-output\") on node \"crc\" DevicePath \"\""
Mar 07 09:33:51 crc kubenswrapper[4761]: I0307 09:33:51.052683 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ns4hc_must-gather-7h8wv_6e76b73c-a01e-4d4a-9574-8db8b23c3adb/copy/0.log"
Mar 07 09:33:51 crc kubenswrapper[4761]: I0307 09:33:51.053222 4761 scope.go:117] "RemoveContainer" containerID="a0bee0769ff56fd8f09ce4d6d57f3b219d70c968a5870f25aa904b98bfb31fb0"
Mar 07 09:33:51 crc kubenswrapper[4761]: I0307 09:33:51.053279 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/must-gather-7h8wv"
Mar 07 09:33:51 crc kubenswrapper[4761]: I0307 09:33:51.114247 4761 scope.go:117] "RemoveContainer" containerID="b4656b8ea524827ca8cf95b0f649a630118cb3e3a497912fed259248ebe052d6"
Mar 07 09:33:51 crc kubenswrapper[4761]: I0307 09:33:51.723033 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" path="/var/lib/kubelet/pods/6e76b73c-a01e-4d4a-9574-8db8b23c3adb/volumes"
Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.156399 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547934-7ndnd"]
Mar 07 09:34:00 crc kubenswrapper[4761]: E0307 09:34:00.157255 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c5d0cf-e07f-44ac-ae34-c0a8d42881b4" containerName="oc"
Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.157267 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c5d0cf-e07f-44ac-ae34-c0a8d42881b4" containerName="oc"
Mar 07 09:34:00 crc kubenswrapper[4761]: E0307 09:34:00.157276 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" containerName="copy"
Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.157283 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" containerName="copy"
Mar 07 09:34:00 crc kubenswrapper[4761]: E0307 09:34:00.157316 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" containerName="gather"
Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.157323 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" containerName="gather"
Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.157513 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c5d0cf-e07f-44ac-ae34-c0a8d42881b4" containerName="oc"
Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.157530 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" containerName="gather"
Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.157549 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" containerName="copy"
Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.158984 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547934-7ndnd"
Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.162138 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.167233 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.167431 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv"
Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.188177 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547934-7ndnd"]
Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.292824 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f279h\" (UniqueName: \"kubernetes.io/projected/8e84b7c7-9e0c-438d-b7d9-274240c287bc-kube-api-access-f279h\") pod \"auto-csr-approver-29547934-7ndnd\" (UID: \"8e84b7c7-9e0c-438d-b7d9-274240c287bc\") " pod="openshift-infra/auto-csr-approver-29547934-7ndnd"
Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.395745 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f279h\" (UniqueName: \"kubernetes.io/projected/8e84b7c7-9e0c-438d-b7d9-274240c287bc-kube-api-access-f279h\") pod \"auto-csr-approver-29547934-7ndnd\" (UID: \"8e84b7c7-9e0c-438d-b7d9-274240c287bc\") " pod="openshift-infra/auto-csr-approver-29547934-7ndnd"
Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.436156 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f279h\" (UniqueName: \"kubernetes.io/projected/8e84b7c7-9e0c-438d-b7d9-274240c287bc-kube-api-access-f279h\") pod \"auto-csr-approver-29547934-7ndnd\" (UID: \"8e84b7c7-9e0c-438d-b7d9-274240c287bc\") " pod="openshift-infra/auto-csr-approver-29547934-7ndnd"
Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.481024 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547934-7ndnd"
Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.997441 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547934-7ndnd"]
Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.998842 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 07 09:34:01 crc kubenswrapper[4761]: I0307 09:34:01.211347 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547934-7ndnd" event={"ID":"8e84b7c7-9e0c-438d-b7d9-274240c287bc","Type":"ContainerStarted","Data":"e425732debd566e13599c3803a61ee626e3261f812387913c16fa4adb7a6efb1"}
Mar 07 09:34:02 crc kubenswrapper[4761]: I0307 09:34:02.223949 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547934-7ndnd" event={"ID":"8e84b7c7-9e0c-438d-b7d9-274240c287bc","Type":"ContainerStarted","Data":"a2c196f9e90c291d95a381c29ede90541c719aaffbcb56b9ea55ea69f9baef4f"}
Mar 07 09:34:02 crc kubenswrapper[4761]: I0307 09:34:02.260120 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547934-7ndnd" podStartSLOduration=1.418172313 podStartE2EDuration="2.260093172s" podCreationTimestamp="2026-03-07 09:34:00 +0000 UTC" firstStartedPulling="2026-03-07 09:34:00.997163732 +0000 UTC m=+6297.906330207" lastFinishedPulling="2026-03-07 09:34:01.839084591 +0000 UTC m=+6298.748251066" observedRunningTime="2026-03-07 09:34:02.247650922 +0000 UTC m=+6299.156817437" watchObservedRunningTime="2026-03-07 09:34:02.260093172 +0000 UTC m=+6299.169259667"
Mar 07 09:34:02 crc kubenswrapper[4761]: I0307 09:34:02.705765 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"
Mar 07 09:34:02 crc kubenswrapper[4761]: E0307 09:34:02.706285 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:34:03 crc kubenswrapper[4761]: I0307 09:34:03.239917 4761 generic.go:334] "Generic (PLEG): container finished" podID="8e84b7c7-9e0c-438d-b7d9-274240c287bc" containerID="a2c196f9e90c291d95a381c29ede90541c719aaffbcb56b9ea55ea69f9baef4f" exitCode=0
Mar 07 09:34:03 crc kubenswrapper[4761]: I0307 09:34:03.239973 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547934-7ndnd" event={"ID":"8e84b7c7-9e0c-438d-b7d9-274240c287bc","Type":"ContainerDied","Data":"a2c196f9e90c291d95a381c29ede90541c719aaffbcb56b9ea55ea69f9baef4f"}
Mar 07 09:34:04 crc kubenswrapper[4761]: I0307 09:34:04.928455 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547934-7ndnd"
Mar 07 09:34:05 crc kubenswrapper[4761]: I0307 09:34:05.001158 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f279h\" (UniqueName: \"kubernetes.io/projected/8e84b7c7-9e0c-438d-b7d9-274240c287bc-kube-api-access-f279h\") pod \"8e84b7c7-9e0c-438d-b7d9-274240c287bc\" (UID: \"8e84b7c7-9e0c-438d-b7d9-274240c287bc\") "
Mar 07 09:34:05 crc kubenswrapper[4761]: I0307 09:34:05.008002 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e84b7c7-9e0c-438d-b7d9-274240c287bc-kube-api-access-f279h" (OuterVolumeSpecName: "kube-api-access-f279h") pod "8e84b7c7-9e0c-438d-b7d9-274240c287bc" (UID: "8e84b7c7-9e0c-438d-b7d9-274240c287bc"). InnerVolumeSpecName "kube-api-access-f279h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 09:34:05 crc kubenswrapper[4761]: I0307 09:34:05.104650 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f279h\" (UniqueName: \"kubernetes.io/projected/8e84b7c7-9e0c-438d-b7d9-274240c287bc-kube-api-access-f279h\") on node \"crc\" DevicePath \"\""
Mar 07 09:34:05 crc kubenswrapper[4761]: I0307 09:34:05.269656 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547934-7ndnd" event={"ID":"8e84b7c7-9e0c-438d-b7d9-274240c287bc","Type":"ContainerDied","Data":"e425732debd566e13599c3803a61ee626e3261f812387913c16fa4adb7a6efb1"}
Mar 07 09:34:05 crc kubenswrapper[4761]: I0307 09:34:05.270056 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e425732debd566e13599c3803a61ee626e3261f812387913c16fa4adb7a6efb1"
Mar 07 09:34:05 crc kubenswrapper[4761]: I0307 09:34:05.269698 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547934-7ndnd"
Mar 07 09:34:05 crc kubenswrapper[4761]: I0307 09:34:05.338606 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547928-ncg8q"]
Mar 07 09:34:05 crc kubenswrapper[4761]: I0307 09:34:05.360236 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547928-ncg8q"]
Mar 07 09:34:05 crc kubenswrapper[4761]: I0307 09:34:05.720899 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="643aaaef-6add-469a-9741-96a3088eeebe" path="/var/lib/kubelet/pods/643aaaef-6add-469a-9741-96a3088eeebe/volumes"
Mar 07 09:34:13 crc kubenswrapper[4761]: I0307 09:34:13.714692 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"
Mar 07 09:34:13 crc kubenswrapper[4761]: E0307 09:34:13.715596 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:34:27 crc kubenswrapper[4761]: I0307 09:34:27.145815 4761 scope.go:117] "RemoveContainer" containerID="9b4b81d8423533fc4e2033c7d32b905359122fcff103ec0b3ff63b0694bbb96c"
Mar 07 09:34:28 crc kubenswrapper[4761]: I0307 09:34:28.705854 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"
Mar 07 09:34:28 crc kubenswrapper[4761]: E0307 09:34:28.706566 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:34:41 crc kubenswrapper[4761]: I0307 09:34:41.705899 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"
Mar 07 09:34:41 crc kubenswrapper[4761]: E0307 09:34:41.706931 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203"
Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.524266 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c2rjd"]
Mar 07 09:34:43 crc kubenswrapper[4761]: E0307 09:34:43.525351 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e84b7c7-9e0c-438d-b7d9-274240c287bc" containerName="oc"
Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.525368 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e84b7c7-9e0c-438d-b7d9-274240c287bc" containerName="oc"
Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.525678 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e84b7c7-9e0c-438d-b7d9-274240c287bc" containerName="oc"
Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.551531 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2rjd"]
Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.551700 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2rjd"
Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.675950 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-catalog-content\") pod \"redhat-marketplace-c2rjd\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " pod="openshift-marketplace/redhat-marketplace-c2rjd"
Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.676019 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwnrg\" (UniqueName: \"kubernetes.io/projected/8a240795-5c49-48d5-b5b1-3771984a08e2-kube-api-access-cwnrg\") pod \"redhat-marketplace-c2rjd\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " pod="openshift-marketplace/redhat-marketplace-c2rjd"
Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.676146 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-utilities\") pod \"redhat-marketplace-c2rjd\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " pod="openshift-marketplace/redhat-marketplace-c2rjd"
Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.785263 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-catalog-content\") pod \"redhat-marketplace-c2rjd\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " pod="openshift-marketplace/redhat-marketplace-c2rjd"
Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.785399 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwnrg\" (UniqueName: \"kubernetes.io/projected/8a240795-5c49-48d5-b5b1-3771984a08e2-kube-api-access-cwnrg\") pod \"redhat-marketplace-c2rjd\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " pod="openshift-marketplace/redhat-marketplace-c2rjd"
Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.785427 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-utilities\") pod \"redhat-marketplace-c2rjd\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " pod="openshift-marketplace/redhat-marketplace-c2rjd"
Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.786368 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-utilities\") pod \"redhat-marketplace-c2rjd\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " pod="openshift-marketplace/redhat-marketplace-c2rjd"
Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.787016 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-catalog-content\") pod \"redhat-marketplace-c2rjd\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " pod="openshift-marketplace/redhat-marketplace-c2rjd"
Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.818448 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwnrg\" (UniqueName: \"kubernetes.io/projected/8a240795-5c49-48d5-b5b1-3771984a08e2-kube-api-access-cwnrg\") pod \"redhat-marketplace-c2rjd\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " pod="openshift-marketplace/redhat-marketplace-c2rjd"
Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.877472 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2rjd"
Mar 07 09:34:46 crc kubenswrapper[4761]: I0307 09:34:46.283957 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2rjd"]
Mar 07 09:34:46 crc kubenswrapper[4761]: W0307 09:34:46.289150 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a240795_5c49_48d5_b5b1_3771984a08e2.slice/crio-b5b99ec6b32e24d6d8ec16e83be57289cfeae3a21f7db5c4f4900ec022c90e45 WatchSource:0}: Error finding container b5b99ec6b32e24d6d8ec16e83be57289cfeae3a21f7db5c4f4900ec022c90e45: Status 404 returned error can't find the container with id b5b99ec6b32e24d6d8ec16e83be57289cfeae3a21f7db5c4f4900ec022c90e45
Mar 07 09:34:46 crc kubenswrapper[4761]: I0307 09:34:46.858571 4761 generic.go:334] "Generic (PLEG): container finished" podID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerID="91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7" exitCode=0
Mar 07 09:34:46 crc kubenswrapper[4761]: I0307 09:34:46.858640 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2rjd" event={"ID":"8a240795-5c49-48d5-b5b1-3771984a08e2","Type":"ContainerDied","Data":"91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7"}
Mar 07 09:34:46 crc kubenswrapper[4761]: I0307 09:34:46.858949 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2rjd" event={"ID":"8a240795-5c49-48d5-b5b1-3771984a08e2","Type":"ContainerStarted","Data":"b5b99ec6b32e24d6d8ec16e83be57289cfeae3a21f7db5c4f4900ec022c90e45"}
Mar 07 09:34:47 crc kubenswrapper[4761]: I0307 09:34:47.873869 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2rjd" event={"ID":"8a240795-5c49-48d5-b5b1-3771984a08e2","Type":"ContainerStarted","Data":"3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12"}
Mar 07 09:34:49 crc kubenswrapper[4761]: I0307 09:34:49.912631 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2rjd" event={"ID":"8a240795-5c49-48d5-b5b1-3771984a08e2","Type":"ContainerDied","Data":"3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12"}
Mar 07 09:34:49 crc kubenswrapper[4761]: I0307 09:34:49.912668 4761 generic.go:334] "Generic (PLEG): container finished" podID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerID="3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12" exitCode=0
Mar 07 09:34:50 crc kubenswrapper[4761]: I0307 09:34:50.926113 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2rjd" event={"ID":"8a240795-5c49-48d5-b5b1-3771984a08e2","Type":"ContainerStarted","Data":"d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80"}
Mar 07 09:34:50 crc kubenswrapper[4761]: I0307 09:34:50.961466 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c2rjd" podStartSLOduration=4.531455855 podStartE2EDuration="7.961444009s" podCreationTimestamp="2026-03-07 09:34:43 +0000 UTC" firstStartedPulling="2026-03-07 09:34:46.861959461 +0000 UTC m=+6343.771125976" lastFinishedPulling="2026-03-07 09:34:50.291947635 +0000 UTC m=+6347.201114130" observedRunningTime="2026-03-07 09:34:50.957170432 +0000 UTC m=+6347.866336907" watchObservedRunningTime="2026-03-07 09:34:50.961444009 +0000 UTC m=+6347.870610494"
Mar 07 09:34:53 crc kubenswrapper[4761]: I0307 09:34:53.878473 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c2rjd"
Mar 07 09:34:53 crc kubenswrapper[4761]: I0307 09:34:53.878737 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness"
status="" pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:34:54 crc kubenswrapper[4761]: I0307 09:34:54.705575 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:34:54 crc kubenswrapper[4761]: E0307 09:34:54.706077 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:34:54 crc kubenswrapper[4761]: I0307 09:34:54.938711 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-c2rjd" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerName="registry-server" probeResult="failure" output=< Mar 07 09:34:54 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:34:54 crc kubenswrapper[4761]: > Mar 07 09:35:03 crc kubenswrapper[4761]: I0307 09:35:03.945227 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:35:03 crc kubenswrapper[4761]: I0307 09:35:03.998046 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:35:04 crc kubenswrapper[4761]: I0307 09:35:04.192819 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2rjd"] Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.133161 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c2rjd" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerName="registry-server" 
containerID="cri-o://d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80" gracePeriod=2 Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.668455 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.706486 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:35:05 crc kubenswrapper[4761]: E0307 09:35:05.707256 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.856236 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-catalog-content\") pod \"8a240795-5c49-48d5-b5b1-3771984a08e2\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.856298 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-utilities\") pod \"8a240795-5c49-48d5-b5b1-3771984a08e2\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.856447 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwnrg\" (UniqueName: \"kubernetes.io/projected/8a240795-5c49-48d5-b5b1-3771984a08e2-kube-api-access-cwnrg\") pod \"8a240795-5c49-48d5-b5b1-3771984a08e2\" (UID: 
\"8a240795-5c49-48d5-b5b1-3771984a08e2\") " Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.858224 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-utilities" (OuterVolumeSpecName: "utilities") pod "8a240795-5c49-48d5-b5b1-3771984a08e2" (UID: "8a240795-5c49-48d5-b5b1-3771984a08e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.875995 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a240795-5c49-48d5-b5b1-3771984a08e2-kube-api-access-cwnrg" (OuterVolumeSpecName: "kube-api-access-cwnrg") pod "8a240795-5c49-48d5-b5b1-3771984a08e2" (UID: "8a240795-5c49-48d5-b5b1-3771984a08e2"). InnerVolumeSpecName "kube-api-access-cwnrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.906679 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a240795-5c49-48d5-b5b1-3771984a08e2" (UID: "8a240795-5c49-48d5-b5b1-3771984a08e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.960268 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwnrg\" (UniqueName: \"kubernetes.io/projected/8a240795-5c49-48d5-b5b1-3771984a08e2-kube-api-access-cwnrg\") on node \"crc\" DevicePath \"\"" Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.960321 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.960342 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.147450 4761 generic.go:334] "Generic (PLEG): container finished" podID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerID="d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80" exitCode=0 Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.147490 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2rjd" event={"ID":"8a240795-5c49-48d5-b5b1-3771984a08e2","Type":"ContainerDied","Data":"d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80"} Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.147522 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2rjd" event={"ID":"8a240795-5c49-48d5-b5b1-3771984a08e2","Type":"ContainerDied","Data":"b5b99ec6b32e24d6d8ec16e83be57289cfeae3a21f7db5c4f4900ec022c90e45"} Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.147540 4761 scope.go:117] "RemoveContainer" containerID="d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 
09:35:06.148471 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.189004 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2rjd"] Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.192134 4761 scope.go:117] "RemoveContainer" containerID="3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.206195 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2rjd"] Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.219407 4761 scope.go:117] "RemoveContainer" containerID="91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.274230 4761 scope.go:117] "RemoveContainer" containerID="d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80" Mar 07 09:35:06 crc kubenswrapper[4761]: E0307 09:35:06.275922 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80\": container with ID starting with d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80 not found: ID does not exist" containerID="d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.275963 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80"} err="failed to get container status \"d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80\": rpc error: code = NotFound desc = could not find container \"d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80\": container with ID starting with 
d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80 not found: ID does not exist" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.275988 4761 scope.go:117] "RemoveContainer" containerID="3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12" Mar 07 09:35:06 crc kubenswrapper[4761]: E0307 09:35:06.276266 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12\": container with ID starting with 3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12 not found: ID does not exist" containerID="3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.276292 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12"} err="failed to get container status \"3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12\": rpc error: code = NotFound desc = could not find container \"3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12\": container with ID starting with 3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12 not found: ID does not exist" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.276308 4761 scope.go:117] "RemoveContainer" containerID="91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7" Mar 07 09:35:06 crc kubenswrapper[4761]: E0307 09:35:06.276699 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7\": container with ID starting with 91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7 not found: ID does not exist" containerID="91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7" Mar 07 09:35:06 crc 
kubenswrapper[4761]: I0307 09:35:06.276750 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7"} err="failed to get container status \"91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7\": rpc error: code = NotFound desc = could not find container \"91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7\": container with ID starting with 91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7 not found: ID does not exist" Mar 07 09:35:07 crc kubenswrapper[4761]: I0307 09:35:07.729977 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" path="/var/lib/kubelet/pods/8a240795-5c49-48d5-b5b1-3771984a08e2/volumes" Mar 07 09:35:20 crc kubenswrapper[4761]: I0307 09:35:20.707118 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:35:20 crc kubenswrapper[4761]: E0307 09:35:20.707995 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:35:31 crc kubenswrapper[4761]: I0307 09:35:31.706532 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:35:31 crc kubenswrapper[4761]: E0307 09:35:31.707454 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:35:44 crc kubenswrapper[4761]: I0307 09:35:44.706522 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:35:45 crc kubenswrapper[4761]: I0307 09:35:45.721642 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"cf0bc973a362a4a1d1bd2cdf1a68c9366425ff5c21df78e94911ee55a1802d90"} Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.151586 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547936-87djr"] Mar 07 09:36:00 crc kubenswrapper[4761]: E0307 09:36:00.152456 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerName="extract-utilities" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.152469 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerName="extract-utilities" Mar 07 09:36:00 crc kubenswrapper[4761]: E0307 09:36:00.152484 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerName="extract-content" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.152490 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerName="extract-content" Mar 07 09:36:00 crc kubenswrapper[4761]: E0307 09:36:00.152505 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerName="registry-server" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.152511 4761 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerName="registry-server" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.152724 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerName="registry-server" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.153489 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547936-87djr" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.156414 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.156515 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.157067 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.166597 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547936-87djr"] Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.209172 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr5b9\" (UniqueName: \"kubernetes.io/projected/9a431a66-89ba-47af-9e0e-e6312a8a3c98-kube-api-access-kr5b9\") pod \"auto-csr-approver-29547936-87djr\" (UID: \"9a431a66-89ba-47af-9e0e-e6312a8a3c98\") " pod="openshift-infra/auto-csr-approver-29547936-87djr" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.312784 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr5b9\" (UniqueName: \"kubernetes.io/projected/9a431a66-89ba-47af-9e0e-e6312a8a3c98-kube-api-access-kr5b9\") pod \"auto-csr-approver-29547936-87djr\" (UID: 
\"9a431a66-89ba-47af-9e0e-e6312a8a3c98\") " pod="openshift-infra/auto-csr-approver-29547936-87djr" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.338782 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr5b9\" (UniqueName: \"kubernetes.io/projected/9a431a66-89ba-47af-9e0e-e6312a8a3c98-kube-api-access-kr5b9\") pod \"auto-csr-approver-29547936-87djr\" (UID: \"9a431a66-89ba-47af-9e0e-e6312a8a3c98\") " pod="openshift-infra/auto-csr-approver-29547936-87djr" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.483257 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547936-87djr" Mar 07 09:36:01 crc kubenswrapper[4761]: I0307 09:36:01.023803 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547936-87djr"] Mar 07 09:36:01 crc kubenswrapper[4761]: W0307 09:36:01.027188 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a431a66_89ba_47af_9e0e_e6312a8a3c98.slice/crio-83f8b8c13df819ab5075aae2a61b4cfade8236ba37ba10082a769b39feb05327 WatchSource:0}: Error finding container 83f8b8c13df819ab5075aae2a61b4cfade8236ba37ba10082a769b39feb05327: Status 404 returned error can't find the container with id 83f8b8c13df819ab5075aae2a61b4cfade8236ba37ba10082a769b39feb05327 Mar 07 09:36:01 crc kubenswrapper[4761]: I0307 09:36:01.964275 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547936-87djr" event={"ID":"9a431a66-89ba-47af-9e0e-e6312a8a3c98","Type":"ContainerStarted","Data":"83f8b8c13df819ab5075aae2a61b4cfade8236ba37ba10082a769b39feb05327"} Mar 07 09:36:02 crc kubenswrapper[4761]: I0307 09:36:02.986315 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547936-87djr" 
event={"ID":"9a431a66-89ba-47af-9e0e-e6312a8a3c98","Type":"ContainerStarted","Data":"84260b639d34a4b8c4f90551cb6ce9634c110cb4105a21f1992946c368f13823"} Mar 07 09:36:03 crc kubenswrapper[4761]: I0307 09:36:03.011977 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547936-87djr" podStartSLOduration=2.172656019 podStartE2EDuration="3.011958503s" podCreationTimestamp="2026-03-07 09:36:00 +0000 UTC" firstStartedPulling="2026-03-07 09:36:01.030998977 +0000 UTC m=+6417.940165452" lastFinishedPulling="2026-03-07 09:36:01.870301451 +0000 UTC m=+6418.779467936" observedRunningTime="2026-03-07 09:36:03.008330482 +0000 UTC m=+6419.917496957" watchObservedRunningTime="2026-03-07 09:36:03.011958503 +0000 UTC m=+6419.921124978" Mar 07 09:36:05 crc kubenswrapper[4761]: I0307 09:36:05.010585 4761 generic.go:334] "Generic (PLEG): container finished" podID="9a431a66-89ba-47af-9e0e-e6312a8a3c98" containerID="84260b639d34a4b8c4f90551cb6ce9634c110cb4105a21f1992946c368f13823" exitCode=0 Mar 07 09:36:05 crc kubenswrapper[4761]: I0307 09:36:05.010691 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547936-87djr" event={"ID":"9a431a66-89ba-47af-9e0e-e6312a8a3c98","Type":"ContainerDied","Data":"84260b639d34a4b8c4f90551cb6ce9634c110cb4105a21f1992946c368f13823"} Mar 07 09:36:06 crc kubenswrapper[4761]: I0307 09:36:06.546738 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547936-87djr" Mar 07 09:36:06 crc kubenswrapper[4761]: I0307 09:36:06.721164 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr5b9\" (UniqueName: \"kubernetes.io/projected/9a431a66-89ba-47af-9e0e-e6312a8a3c98-kube-api-access-kr5b9\") pod \"9a431a66-89ba-47af-9e0e-e6312a8a3c98\" (UID: \"9a431a66-89ba-47af-9e0e-e6312a8a3c98\") " Mar 07 09:36:06 crc kubenswrapper[4761]: I0307 09:36:06.731906 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a431a66-89ba-47af-9e0e-e6312a8a3c98-kube-api-access-kr5b9" (OuterVolumeSpecName: "kube-api-access-kr5b9") pod "9a431a66-89ba-47af-9e0e-e6312a8a3c98" (UID: "9a431a66-89ba-47af-9e0e-e6312a8a3c98"). InnerVolumeSpecName "kube-api-access-kr5b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:36:06 crc kubenswrapper[4761]: I0307 09:36:06.825814 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr5b9\" (UniqueName: \"kubernetes.io/projected/9a431a66-89ba-47af-9e0e-e6312a8a3c98-kube-api-access-kr5b9\") on node \"crc\" DevicePath \"\"" Mar 07 09:36:07 crc kubenswrapper[4761]: I0307 09:36:07.041350 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547936-87djr" event={"ID":"9a431a66-89ba-47af-9e0e-e6312a8a3c98","Type":"ContainerDied","Data":"83f8b8c13df819ab5075aae2a61b4cfade8236ba37ba10082a769b39feb05327"} Mar 07 09:36:07 crc kubenswrapper[4761]: I0307 09:36:07.041745 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83f8b8c13df819ab5075aae2a61b4cfade8236ba37ba10082a769b39feb05327" Mar 07 09:36:07 crc kubenswrapper[4761]: I0307 09:36:07.041818 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547936-87djr" Mar 07 09:36:07 crc kubenswrapper[4761]: I0307 09:36:07.114095 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547930-wk7nk"] Mar 07 09:36:07 crc kubenswrapper[4761]: I0307 09:36:07.123214 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547930-wk7nk"] Mar 07 09:36:07 crc kubenswrapper[4761]: I0307 09:36:07.722596 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="579abd64-02ee-47c8-b1ae-a7116434d46c" path="/var/lib/kubelet/pods/579abd64-02ee-47c8-b1ae-a7116434d46c/volumes" Mar 07 09:36:27 crc kubenswrapper[4761]: I0307 09:36:27.337142 4761 scope.go:117] "RemoveContainer" containerID="6d1d67f32df6e518e4a1ae02e09de8e2379cfc59e9bd47a981339d6d16cadb53" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.200771 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hdxg5"] Mar 07 09:37:33 crc kubenswrapper[4761]: E0307 09:37:33.202256 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a431a66-89ba-47af-9e0e-e6312a8a3c98" containerName="oc" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.202281 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a431a66-89ba-47af-9e0e-e6312a8a3c98" containerName="oc" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.202770 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a431a66-89ba-47af-9e0e-e6312a8a3c98" containerName="oc" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.205108 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.218460 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-utilities\") pod \"community-operators-hdxg5\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.218549 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltvhx\" (UniqueName: \"kubernetes.io/projected/565d94ad-bfce-488c-833a-ed332b809bbc-kube-api-access-ltvhx\") pod \"community-operators-hdxg5\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.218703 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-catalog-content\") pod \"community-operators-hdxg5\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.219695 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hdxg5"] Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.321082 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-utilities\") pod \"community-operators-hdxg5\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.321164 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ltvhx\" (UniqueName: \"kubernetes.io/projected/565d94ad-bfce-488c-833a-ed332b809bbc-kube-api-access-ltvhx\") pod \"community-operators-hdxg5\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.321319 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-catalog-content\") pod \"community-operators-hdxg5\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.321774 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-utilities\") pod \"community-operators-hdxg5\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.321791 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-catalog-content\") pod \"community-operators-hdxg5\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.345103 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltvhx\" (UniqueName: \"kubernetes.io/projected/565d94ad-bfce-488c-833a-ed332b809bbc-kube-api-access-ltvhx\") pod \"community-operators-hdxg5\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.545295 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:34 crc kubenswrapper[4761]: I0307 09:37:34.319928 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hdxg5"] Mar 07 09:37:34 crc kubenswrapper[4761]: I0307 09:37:34.773680 4761 generic.go:334] "Generic (PLEG): container finished" podID="565d94ad-bfce-488c-833a-ed332b809bbc" containerID="334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7" exitCode=0 Mar 07 09:37:34 crc kubenswrapper[4761]: I0307 09:37:34.774018 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdxg5" event={"ID":"565d94ad-bfce-488c-833a-ed332b809bbc","Type":"ContainerDied","Data":"334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7"} Mar 07 09:37:34 crc kubenswrapper[4761]: I0307 09:37:34.775012 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdxg5" event={"ID":"565d94ad-bfce-488c-833a-ed332b809bbc","Type":"ContainerStarted","Data":"a665755276d2fdc5f008d4eb639803c2f75b83db9c233a5d813ae550701c6d38"} Mar 07 09:37:35 crc kubenswrapper[4761]: I0307 09:37:35.789914 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdxg5" event={"ID":"565d94ad-bfce-488c-833a-ed332b809bbc","Type":"ContainerStarted","Data":"ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7"} Mar 07 09:37:37 crc kubenswrapper[4761]: I0307 09:37:37.816642 4761 generic.go:334] "Generic (PLEG): container finished" podID="565d94ad-bfce-488c-833a-ed332b809bbc" containerID="ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7" exitCode=0 Mar 07 09:37:37 crc kubenswrapper[4761]: I0307 09:37:37.816741 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdxg5" 
event={"ID":"565d94ad-bfce-488c-833a-ed332b809bbc","Type":"ContainerDied","Data":"ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7"} Mar 07 09:37:38 crc kubenswrapper[4761]: I0307 09:37:38.831334 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdxg5" event={"ID":"565d94ad-bfce-488c-833a-ed332b809bbc","Type":"ContainerStarted","Data":"d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f"} Mar 07 09:37:38 crc kubenswrapper[4761]: I0307 09:37:38.868933 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hdxg5" podStartSLOduration=2.303166857 podStartE2EDuration="5.86891502s" podCreationTimestamp="2026-03-07 09:37:33 +0000 UTC" firstStartedPulling="2026-03-07 09:37:34.775838492 +0000 UTC m=+6511.685004967" lastFinishedPulling="2026-03-07 09:37:38.341586615 +0000 UTC m=+6515.250753130" observedRunningTime="2026-03-07 09:37:38.866466359 +0000 UTC m=+6515.775632844" watchObservedRunningTime="2026-03-07 09:37:38.86891502 +0000 UTC m=+6515.778081495" Mar 07 09:37:43 crc kubenswrapper[4761]: I0307 09:37:43.546282 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:43 crc kubenswrapper[4761]: I0307 09:37:43.546981 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:43 crc kubenswrapper[4761]: I0307 09:37:43.623951 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:43 crc kubenswrapper[4761]: I0307 09:37:43.957951 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:44 crc kubenswrapper[4761]: I0307 09:37:44.012153 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-hdxg5"] Mar 07 09:37:45 crc kubenswrapper[4761]: I0307 09:37:45.923998 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hdxg5" podUID="565d94ad-bfce-488c-833a-ed332b809bbc" containerName="registry-server" containerID="cri-o://d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f" gracePeriod=2 Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.540330 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.594709 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltvhx\" (UniqueName: \"kubernetes.io/projected/565d94ad-bfce-488c-833a-ed332b809bbc-kube-api-access-ltvhx\") pod \"565d94ad-bfce-488c-833a-ed332b809bbc\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.595041 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-utilities\") pod \"565d94ad-bfce-488c-833a-ed332b809bbc\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.595120 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-catalog-content\") pod \"565d94ad-bfce-488c-833a-ed332b809bbc\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.595897 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-utilities" (OuterVolumeSpecName: "utilities") pod "565d94ad-bfce-488c-833a-ed332b809bbc" (UID: 
"565d94ad-bfce-488c-833a-ed332b809bbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.597828 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.607989 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/565d94ad-bfce-488c-833a-ed332b809bbc-kube-api-access-ltvhx" (OuterVolumeSpecName: "kube-api-access-ltvhx") pod "565d94ad-bfce-488c-833a-ed332b809bbc" (UID: "565d94ad-bfce-488c-833a-ed332b809bbc"). InnerVolumeSpecName "kube-api-access-ltvhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.662661 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "565d94ad-bfce-488c-833a-ed332b809bbc" (UID: "565d94ad-bfce-488c-833a-ed332b809bbc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.701345 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltvhx\" (UniqueName: \"kubernetes.io/projected/565d94ad-bfce-488c-833a-ed332b809bbc-kube-api-access-ltvhx\") on node \"crc\" DevicePath \"\"" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.701409 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.939602 4761 generic.go:334] "Generic (PLEG): container finished" podID="565d94ad-bfce-488c-833a-ed332b809bbc" containerID="d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f" exitCode=0 Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.939675 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdxg5" event={"ID":"565d94ad-bfce-488c-833a-ed332b809bbc","Type":"ContainerDied","Data":"d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f"} Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.939786 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdxg5" event={"ID":"565d94ad-bfce-488c-833a-ed332b809bbc","Type":"ContainerDied","Data":"a665755276d2fdc5f008d4eb639803c2f75b83db9c233a5d813ae550701c6d38"} Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.939784 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.939817 4761 scope.go:117] "RemoveContainer" containerID="d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.972503 4761 scope.go:117] "RemoveContainer" containerID="ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.993381 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hdxg5"] Mar 07 09:37:47 crc kubenswrapper[4761]: I0307 09:37:47.010506 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hdxg5"] Mar 07 09:37:47 crc kubenswrapper[4761]: I0307 09:37:47.027478 4761 scope.go:117] "RemoveContainer" containerID="334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7" Mar 07 09:37:47 crc kubenswrapper[4761]: I0307 09:37:47.080330 4761 scope.go:117] "RemoveContainer" containerID="d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f" Mar 07 09:37:47 crc kubenswrapper[4761]: E0307 09:37:47.080856 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f\": container with ID starting with d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f not found: ID does not exist" containerID="d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f" Mar 07 09:37:47 crc kubenswrapper[4761]: I0307 09:37:47.080892 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f"} err="failed to get container status \"d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f\": rpc error: code = NotFound desc = could not find 
container \"d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f\": container with ID starting with d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f not found: ID does not exist" Mar 07 09:37:47 crc kubenswrapper[4761]: I0307 09:37:47.080917 4761 scope.go:117] "RemoveContainer" containerID="ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7" Mar 07 09:37:47 crc kubenswrapper[4761]: E0307 09:37:47.081920 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7\": container with ID starting with ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7 not found: ID does not exist" containerID="ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7" Mar 07 09:37:47 crc kubenswrapper[4761]: I0307 09:37:47.081971 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7"} err="failed to get container status \"ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7\": rpc error: code = NotFound desc = could not find container \"ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7\": container with ID starting with ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7 not found: ID does not exist" Mar 07 09:37:47 crc kubenswrapper[4761]: I0307 09:37:47.081996 4761 scope.go:117] "RemoveContainer" containerID="334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7" Mar 07 09:37:47 crc kubenswrapper[4761]: E0307 09:37:47.082446 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7\": container with ID starting with 334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7 not found: ID does 
not exist" containerID="334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7" Mar 07 09:37:47 crc kubenswrapper[4761]: I0307 09:37:47.082475 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7"} err="failed to get container status \"334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7\": rpc error: code = NotFound desc = could not find container \"334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7\": container with ID starting with 334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7 not found: ID does not exist" Mar 07 09:37:47 crc kubenswrapper[4761]: I0307 09:37:47.730796 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="565d94ad-bfce-488c-833a-ed332b809bbc" path="/var/lib/kubelet/pods/565d94ad-bfce-488c-833a-ed332b809bbc/volumes" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.152921 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547938-t594j"] Mar 07 09:38:00 crc kubenswrapper[4761]: E0307 09:38:00.154843 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565d94ad-bfce-488c-833a-ed332b809bbc" containerName="registry-server" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.154936 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="565d94ad-bfce-488c-833a-ed332b809bbc" containerName="registry-server" Mar 07 09:38:00 crc kubenswrapper[4761]: E0307 09:38:00.155029 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565d94ad-bfce-488c-833a-ed332b809bbc" containerName="extract-utilities" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.155082 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="565d94ad-bfce-488c-833a-ed332b809bbc" containerName="extract-utilities" Mar 07 09:38:00 crc kubenswrapper[4761]: E0307 09:38:00.155140 4761 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="565d94ad-bfce-488c-833a-ed332b809bbc" containerName="extract-content" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.155190 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="565d94ad-bfce-488c-833a-ed332b809bbc" containerName="extract-content" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.155458 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="565d94ad-bfce-488c-833a-ed332b809bbc" containerName="registry-server" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.156295 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547938-t594j" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.158444 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.159431 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.160390 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.183566 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547938-t594j"] Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.300243 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5672n\" (UniqueName: \"kubernetes.io/projected/d1f69cb6-f8b0-453c-b99f-138dcc7ba27f-kube-api-access-5672n\") pod \"auto-csr-approver-29547938-t594j\" (UID: \"d1f69cb6-f8b0-453c-b99f-138dcc7ba27f\") " pod="openshift-infra/auto-csr-approver-29547938-t594j" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.403092 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5672n\" (UniqueName: \"kubernetes.io/projected/d1f69cb6-f8b0-453c-b99f-138dcc7ba27f-kube-api-access-5672n\") pod \"auto-csr-approver-29547938-t594j\" (UID: \"d1f69cb6-f8b0-453c-b99f-138dcc7ba27f\") " pod="openshift-infra/auto-csr-approver-29547938-t594j" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.437936 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5672n\" (UniqueName: \"kubernetes.io/projected/d1f69cb6-f8b0-453c-b99f-138dcc7ba27f-kube-api-access-5672n\") pod \"auto-csr-approver-29547938-t594j\" (UID: \"d1f69cb6-f8b0-453c-b99f-138dcc7ba27f\") " pod="openshift-infra/auto-csr-approver-29547938-t594j" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.481703 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547938-t594j" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.994978 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547938-t594j"] Mar 07 09:38:01 crc kubenswrapper[4761]: I0307 09:38:01.147975 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547938-t594j" event={"ID":"d1f69cb6-f8b0-453c-b99f-138dcc7ba27f","Type":"ContainerStarted","Data":"c08ac78f3cc7c4fa475828204658c8b282e20fcd48d8a2d9ce2d5428970b9d83"} Mar 07 09:38:03 crc kubenswrapper[4761]: I0307 09:38:03.172301 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547938-t594j" event={"ID":"d1f69cb6-f8b0-453c-b99f-138dcc7ba27f","Type":"ContainerStarted","Data":"a8c94de2879dd771a735bca667983af81c0440f25f98e34fe62acce82c5b4b92"} Mar 07 09:38:04 crc kubenswrapper[4761]: I0307 09:38:04.185593 4761 generic.go:334] "Generic (PLEG): container finished" podID="d1f69cb6-f8b0-453c-b99f-138dcc7ba27f" containerID="a8c94de2879dd771a735bca667983af81c0440f25f98e34fe62acce82c5b4b92" exitCode=0 Mar 07 09:38:04 crc kubenswrapper[4761]: 
I0307 09:38:04.185645 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547938-t594j" event={"ID":"d1f69cb6-f8b0-453c-b99f-138dcc7ba27f","Type":"ContainerDied","Data":"a8c94de2879dd771a735bca667983af81c0440f25f98e34fe62acce82c5b4b92"} Mar 07 09:38:05 crc kubenswrapper[4761]: I0307 09:38:05.672190 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547938-t594j" Mar 07 09:38:05 crc kubenswrapper[4761]: I0307 09:38:05.732076 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5672n\" (UniqueName: \"kubernetes.io/projected/d1f69cb6-f8b0-453c-b99f-138dcc7ba27f-kube-api-access-5672n\") pod \"d1f69cb6-f8b0-453c-b99f-138dcc7ba27f\" (UID: \"d1f69cb6-f8b0-453c-b99f-138dcc7ba27f\") " Mar 07 09:38:05 crc kubenswrapper[4761]: I0307 09:38:05.745006 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f69cb6-f8b0-453c-b99f-138dcc7ba27f-kube-api-access-5672n" (OuterVolumeSpecName: "kube-api-access-5672n") pod "d1f69cb6-f8b0-453c-b99f-138dcc7ba27f" (UID: "d1f69cb6-f8b0-453c-b99f-138dcc7ba27f"). InnerVolumeSpecName "kube-api-access-5672n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:38:05 crc kubenswrapper[4761]: I0307 09:38:05.835094 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5672n\" (UniqueName: \"kubernetes.io/projected/d1f69cb6-f8b0-453c-b99f-138dcc7ba27f-kube-api-access-5672n\") on node \"crc\" DevicePath \"\"" Mar 07 09:38:06 crc kubenswrapper[4761]: I0307 09:38:06.209147 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547938-t594j" event={"ID":"d1f69cb6-f8b0-453c-b99f-138dcc7ba27f","Type":"ContainerDied","Data":"c08ac78f3cc7c4fa475828204658c8b282e20fcd48d8a2d9ce2d5428970b9d83"} Mar 07 09:38:06 crc kubenswrapper[4761]: I0307 09:38:06.209395 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c08ac78f3cc7c4fa475828204658c8b282e20fcd48d8a2d9ce2d5428970b9d83" Mar 07 09:38:06 crc kubenswrapper[4761]: I0307 09:38:06.209212 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547938-t594j" Mar 07 09:38:06 crc kubenswrapper[4761]: I0307 09:38:06.282059 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547932-8wtdb"] Mar 07 09:38:06 crc kubenswrapper[4761]: I0307 09:38:06.295036 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547932-8wtdb"] Mar 07 09:38:07 crc kubenswrapper[4761]: I0307 09:38:07.723348 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67c5d0cf-e07f-44ac-ae34-c0a8d42881b4" path="/var/lib/kubelet/pods/67c5d0cf-e07f-44ac-ae34-c0a8d42881b4/volumes" Mar 07 09:38:13 crc kubenswrapper[4761]: I0307 09:38:13.768536 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 07 09:38:13 crc kubenswrapper[4761]: I0307 09:38:13.769084 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:38:27 crc kubenswrapper[4761]: I0307 09:38:27.487683 4761 scope.go:117] "RemoveContainer" containerID="019c837209f1119e836361ac64776514973bf8c3367ba3225bcf758b1ce4d9d5" Mar 07 09:38:43 crc kubenswrapper[4761]: I0307 09:38:43.768836 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:38:43 crc kubenswrapper[4761]: I0307 09:38:43.769594 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:39:13 crc kubenswrapper[4761]: I0307 09:39:13.768779 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:39:13 crc kubenswrapper[4761]: I0307 09:39:13.769265 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:39:13 crc kubenswrapper[4761]: I0307 09:39:13.769308 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 09:39:13 crc kubenswrapper[4761]: I0307 09:39:13.770211 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf0bc973a362a4a1d1bd2cdf1a68c9366425ff5c21df78e94911ee55a1802d90"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 09:39:13 crc kubenswrapper[4761]: I0307 09:39:13.770273 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://cf0bc973a362a4a1d1bd2cdf1a68c9366425ff5c21df78e94911ee55a1802d90" gracePeriod=600 Mar 07 09:39:14 crc kubenswrapper[4761]: I0307 09:39:14.264587 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="cf0bc973a362a4a1d1bd2cdf1a68c9366425ff5c21df78e94911ee55a1802d90" exitCode=0 Mar 07 09:39:14 crc kubenswrapper[4761]: I0307 09:39:14.264749 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"cf0bc973a362a4a1d1bd2cdf1a68c9366425ff5c21df78e94911ee55a1802d90"} Mar 07 09:39:14 crc kubenswrapper[4761]: I0307 09:39:14.265051 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" 
event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"9b4b1607ab13d4928dee1283d92526d443758fafe49a3d81c8e00288d4a434d6"} Mar 07 09:39:14 crc kubenswrapper[4761]: I0307 09:39:14.265080 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:39:50 crc kubenswrapper[4761]: I0307 09:39:50.406005 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vxx25"] Mar 07 09:39:50 crc kubenswrapper[4761]: E0307 09:39:50.407221 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f69cb6-f8b0-453c-b99f-138dcc7ba27f" containerName="oc" Mar 07 09:39:50 crc kubenswrapper[4761]: I0307 09:39:50.407242 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f69cb6-f8b0-453c-b99f-138dcc7ba27f" containerName="oc" Mar 07 09:39:50 crc kubenswrapper[4761]: I0307 09:39:50.407626 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f69cb6-f8b0-453c-b99f-138dcc7ba27f" containerName="oc" Mar 07 09:39:50 crc kubenswrapper[4761]: I0307 09:39:50.409751 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:50 crc kubenswrapper[4761]: I0307 09:39:50.436276 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vxx25"] Mar 07 09:39:50 crc kubenswrapper[4761]: I0307 09:39:50.478947 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-catalog-content\") pod \"certified-operators-vxx25\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:50 crc kubenswrapper[4761]: I0307 09:39:50.479000 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkght\" (UniqueName: \"kubernetes.io/projected/b40c22b0-16ac-4673-a2b2-7de701b83d0a-kube-api-access-zkght\") pod \"certified-operators-vxx25\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:50 crc kubenswrapper[4761]: I0307 09:39:50.479271 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-utilities\") pod \"certified-operators-vxx25\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:51 crc kubenswrapper[4761]: I0307 09:39:51.898126 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-catalog-content\") pod \"certified-operators-vxx25\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:51 crc kubenswrapper[4761]: I0307 09:39:51.898455 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zkght\" (UniqueName: \"kubernetes.io/projected/b40c22b0-16ac-4673-a2b2-7de701b83d0a-kube-api-access-zkght\") pod \"certified-operators-vxx25\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:51 crc kubenswrapper[4761]: I0307 09:39:51.898605 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-utilities\") pod \"certified-operators-vxx25\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:51 crc kubenswrapper[4761]: I0307 09:39:51.899731 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-utilities\") pod \"certified-operators-vxx25\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:51 crc kubenswrapper[4761]: I0307 09:39:51.899788 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-catalog-content\") pod \"certified-operators-vxx25\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:51 crc kubenswrapper[4761]: I0307 09:39:51.955740 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkght\" (UniqueName: \"kubernetes.io/projected/b40c22b0-16ac-4673-a2b2-7de701b83d0a-kube-api-access-zkght\") pod \"certified-operators-vxx25\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:52 crc kubenswrapper[4761]: I0307 09:39:52.245491 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vxx25"
Mar 07 09:39:52 crc kubenswrapper[4761]: I0307 09:39:52.795139 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vxx25"]
Mar 07 09:39:52 crc kubenswrapper[4761]: W0307 09:39:52.800546 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb40c22b0_16ac_4673_a2b2_7de701b83d0a.slice/crio-dcb9848778d23493260c759553049b5311b8b5242d1b18fc80881a81ebd67d48 WatchSource:0}: Error finding container dcb9848778d23493260c759553049b5311b8b5242d1b18fc80881a81ebd67d48: Status 404 returned error can't find the container with id dcb9848778d23493260c759553049b5311b8b5242d1b18fc80881a81ebd67d48
Mar 07 09:39:52 crc kubenswrapper[4761]: I0307 09:39:52.991941 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxx25" event={"ID":"b40c22b0-16ac-4673-a2b2-7de701b83d0a","Type":"ContainerStarted","Data":"dcb9848778d23493260c759553049b5311b8b5242d1b18fc80881a81ebd67d48"}
Mar 07 09:39:54 crc kubenswrapper[4761]: I0307 09:39:54.013498 4761 generic.go:334] "Generic (PLEG): container finished" podID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerID="6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85" exitCode=0
Mar 07 09:39:54 crc kubenswrapper[4761]: I0307 09:39:54.013586 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxx25" event={"ID":"b40c22b0-16ac-4673-a2b2-7de701b83d0a","Type":"ContainerDied","Data":"6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85"}
Mar 07 09:39:54 crc kubenswrapper[4761]: I0307 09:39:54.018697 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 07 09:39:55 crc kubenswrapper[4761]: I0307 09:39:55.029890 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxx25" event={"ID":"b40c22b0-16ac-4673-a2b2-7de701b83d0a","Type":"ContainerStarted","Data":"418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0"}
Mar 07 09:39:57 crc kubenswrapper[4761]: I0307 09:39:57.080245 4761 generic.go:334] "Generic (PLEG): container finished" podID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerID="418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0" exitCode=0
Mar 07 09:39:57 crc kubenswrapper[4761]: I0307 09:39:57.080322 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxx25" event={"ID":"b40c22b0-16ac-4673-a2b2-7de701b83d0a","Type":"ContainerDied","Data":"418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0"}
Mar 07 09:39:58 crc kubenswrapper[4761]: I0307 09:39:58.110024 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxx25" event={"ID":"b40c22b0-16ac-4673-a2b2-7de701b83d0a","Type":"ContainerStarted","Data":"4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86"}
Mar 07 09:39:58 crc kubenswrapper[4761]: I0307 09:39:58.133503 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vxx25" podStartSLOduration=4.625700375 podStartE2EDuration="8.133476881s" podCreationTimestamp="2026-03-07 09:39:50 +0000 UTC" firstStartedPulling="2026-03-07 09:39:54.01705589 +0000 UTC m=+6650.926222375" lastFinishedPulling="2026-03-07 09:39:57.524832386 +0000 UTC m=+6654.433998881" observedRunningTime="2026-03-07 09:39:58.132796644 +0000 UTC m=+6655.041963119" watchObservedRunningTime="2026-03-07 09:39:58.133476881 +0000 UTC m=+6655.042643356"
Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.172996 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547940-4qghb"]
Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.177117 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547940-4qghb"
Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.182744 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.182871 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.182747 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv"
Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.187968 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547940-4qghb"]
Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.343325 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbxmt\" (UniqueName: \"kubernetes.io/projected/687b5288-d27b-4cc0-8712-adf9c53fc9e0-kube-api-access-sbxmt\") pod \"auto-csr-approver-29547940-4qghb\" (UID: \"687b5288-d27b-4cc0-8712-adf9c53fc9e0\") " pod="openshift-infra/auto-csr-approver-29547940-4qghb"
Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.446214 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbxmt\" (UniqueName: \"kubernetes.io/projected/687b5288-d27b-4cc0-8712-adf9c53fc9e0-kube-api-access-sbxmt\") pod \"auto-csr-approver-29547940-4qghb\" (UID: \"687b5288-d27b-4cc0-8712-adf9c53fc9e0\") " pod="openshift-infra/auto-csr-approver-29547940-4qghb"
Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.466749 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbxmt\" (UniqueName: \"kubernetes.io/projected/687b5288-d27b-4cc0-8712-adf9c53fc9e0-kube-api-access-sbxmt\") pod \"auto-csr-approver-29547940-4qghb\" (UID: \"687b5288-d27b-4cc0-8712-adf9c53fc9e0\") " pod="openshift-infra/auto-csr-approver-29547940-4qghb"
Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.509387 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547940-4qghb"
Mar 07 09:40:01 crc kubenswrapper[4761]: I0307 09:40:01.003169 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547940-4qghb"]
Mar 07 09:40:01 crc kubenswrapper[4761]: W0307 09:40:01.008856 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod687b5288_d27b_4cc0_8712_adf9c53fc9e0.slice/crio-ce6befc68c43de27f91e080e9df7dd5a9b8dd7b93a7aae381a44a3b740feb7fc WatchSource:0}: Error finding container ce6befc68c43de27f91e080e9df7dd5a9b8dd7b93a7aae381a44a3b740feb7fc: Status 404 returned error can't find the container with id ce6befc68c43de27f91e080e9df7dd5a9b8dd7b93a7aae381a44a3b740feb7fc
Mar 07 09:40:01 crc kubenswrapper[4761]: I0307 09:40:01.154426 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547940-4qghb" event={"ID":"687b5288-d27b-4cc0-8712-adf9c53fc9e0","Type":"ContainerStarted","Data":"ce6befc68c43de27f91e080e9df7dd5a9b8dd7b93a7aae381a44a3b740feb7fc"}
Mar 07 09:40:02 crc kubenswrapper[4761]: I0307 09:40:02.245644 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vxx25"
Mar 07 09:40:02 crc kubenswrapper[4761]: I0307 09:40:02.245872 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vxx25"
Mar 07 09:40:03 crc kubenswrapper[4761]: I0307 09:40:03.334431 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-vxx25" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerName="registry-server" probeResult="failure" output=<
Mar 07 09:40:03 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s
Mar 07 09:40:03 crc kubenswrapper[4761]: >
Mar 07 09:40:03 crc kubenswrapper[4761]: I0307 09:40:03.350332 4761 generic.go:334] "Generic (PLEG): container finished" podID="687b5288-d27b-4cc0-8712-adf9c53fc9e0" containerID="6b8dbbb4f41e81469408d631194e6a06e7a6e1b0e1b3f938315914e80b6dcfd9" exitCode=0
Mar 07 09:40:03 crc kubenswrapper[4761]: I0307 09:40:03.350481 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547940-4qghb" event={"ID":"687b5288-d27b-4cc0-8712-adf9c53fc9e0","Type":"ContainerDied","Data":"6b8dbbb4f41e81469408d631194e6a06e7a6e1b0e1b3f938315914e80b6dcfd9"}
Mar 07 09:40:05 crc kubenswrapper[4761]: I0307 09:40:05.223952 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547940-4qghb"
Mar 07 09:40:05 crc kubenswrapper[4761]: I0307 09:40:05.269121 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbxmt\" (UniqueName: \"kubernetes.io/projected/687b5288-d27b-4cc0-8712-adf9c53fc9e0-kube-api-access-sbxmt\") pod \"687b5288-d27b-4cc0-8712-adf9c53fc9e0\" (UID: \"687b5288-d27b-4cc0-8712-adf9c53fc9e0\") "
Mar 07 09:40:05 crc kubenswrapper[4761]: I0307 09:40:05.282668 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/687b5288-d27b-4cc0-8712-adf9c53fc9e0-kube-api-access-sbxmt" (OuterVolumeSpecName: "kube-api-access-sbxmt") pod "687b5288-d27b-4cc0-8712-adf9c53fc9e0" (UID: "687b5288-d27b-4cc0-8712-adf9c53fc9e0"). InnerVolumeSpecName "kube-api-access-sbxmt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 09:40:05 crc kubenswrapper[4761]: I0307 09:40:05.374437 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547940-4qghb" event={"ID":"687b5288-d27b-4cc0-8712-adf9c53fc9e0","Type":"ContainerDied","Data":"ce6befc68c43de27f91e080e9df7dd5a9b8dd7b93a7aae381a44a3b740feb7fc"}
Mar 07 09:40:05 crc kubenswrapper[4761]: I0307 09:40:05.374477 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce6befc68c43de27f91e080e9df7dd5a9b8dd7b93a7aae381a44a3b740feb7fc"
Mar 07 09:40:05 crc kubenswrapper[4761]: I0307 09:40:05.374489 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547940-4qghb"
Mar 07 09:40:05 crc kubenswrapper[4761]: I0307 09:40:05.384854 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbxmt\" (UniqueName: \"kubernetes.io/projected/687b5288-d27b-4cc0-8712-adf9c53fc9e0-kube-api-access-sbxmt\") on node \"crc\" DevicePath \"\""
Mar 07 09:40:06 crc kubenswrapper[4761]: I0307 09:40:06.314441 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547934-7ndnd"]
Mar 07 09:40:06 crc kubenswrapper[4761]: I0307 09:40:06.325324 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547934-7ndnd"]
Mar 07 09:40:07 crc kubenswrapper[4761]: I0307 09:40:07.736089 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e84b7c7-9e0c-438d-b7d9-274240c287bc" path="/var/lib/kubelet/pods/8e84b7c7-9e0c-438d-b7d9-274240c287bc/volumes"
Mar 07 09:40:12 crc kubenswrapper[4761]: I0307 09:40:12.313036 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vxx25"
Mar 07 09:40:12 crc kubenswrapper[4761]: I0307 09:40:12.375777 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vxx25"
Mar 07 09:40:12 crc kubenswrapper[4761]: I0307 09:40:12.556851 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vxx25"]
Mar 07 09:40:13 crc kubenswrapper[4761]: I0307 09:40:13.482458 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vxx25" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerName="registry-server" containerID="cri-o://4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86" gracePeriod=2
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.100210 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vxx25"
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.249113 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-utilities\") pod \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") "
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.249207 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkght\" (UniqueName: \"kubernetes.io/projected/b40c22b0-16ac-4673-a2b2-7de701b83d0a-kube-api-access-zkght\") pod \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") "
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.249459 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-catalog-content\") pod \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") "
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.250390 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-utilities" (OuterVolumeSpecName: "utilities") pod "b40c22b0-16ac-4673-a2b2-7de701b83d0a" (UID: "b40c22b0-16ac-4673-a2b2-7de701b83d0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.258442 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b40c22b0-16ac-4673-a2b2-7de701b83d0a-kube-api-access-zkght" (OuterVolumeSpecName: "kube-api-access-zkght") pod "b40c22b0-16ac-4673-a2b2-7de701b83d0a" (UID: "b40c22b0-16ac-4673-a2b2-7de701b83d0a"). InnerVolumeSpecName "kube-api-access-zkght". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.336578 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b40c22b0-16ac-4673-a2b2-7de701b83d0a" (UID: "b40c22b0-16ac-4673-a2b2-7de701b83d0a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.352755 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkght\" (UniqueName: \"kubernetes.io/projected/b40c22b0-16ac-4673-a2b2-7de701b83d0a-kube-api-access-zkght\") on node \"crc\" DevicePath \"\""
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.352802 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.352816 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.496418 4761 generic.go:334] "Generic (PLEG): container finished" podID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerID="4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86" exitCode=0
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.496467 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxx25" event={"ID":"b40c22b0-16ac-4673-a2b2-7de701b83d0a","Type":"ContainerDied","Data":"4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86"}
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.496497 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxx25" event={"ID":"b40c22b0-16ac-4673-a2b2-7de701b83d0a","Type":"ContainerDied","Data":"dcb9848778d23493260c759553049b5311b8b5242d1b18fc80881a81ebd67d48"}
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.496502 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vxx25"
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.496517 4761 scope.go:117] "RemoveContainer" containerID="4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86"
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.527516 4761 scope.go:117] "RemoveContainer" containerID="418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0"
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.529690 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vxx25"]
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.547563 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vxx25"]
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.572938 4761 scope.go:117] "RemoveContainer" containerID="6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85"
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.615325 4761 scope.go:117] "RemoveContainer" containerID="4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86"
Mar 07 09:40:14 crc kubenswrapper[4761]: E0307 09:40:14.615916 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86\": container with ID starting with 4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86 not found: ID does not exist" containerID="4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86"
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.616015 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86"} err="failed to get container status \"4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86\": rpc error: code = NotFound desc = could not find container \"4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86\": container with ID starting with 4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86 not found: ID does not exist"
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.616141 4761 scope.go:117] "RemoveContainer" containerID="418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0"
Mar 07 09:40:14 crc kubenswrapper[4761]: E0307 09:40:14.616575 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0\": container with ID starting with 418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0 not found: ID does not exist" containerID="418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0"
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.616600 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0"} err="failed to get container status \"418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0\": rpc error: code = NotFound desc = could not find container \"418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0\": container with ID starting with 418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0 not found: ID does not exist"
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.616617 4761 scope.go:117] "RemoveContainer" containerID="6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85"
Mar 07 09:40:14 crc kubenswrapper[4761]: E0307 09:40:14.616878 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85\": container with ID starting with 6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85 not found: ID does not exist" containerID="6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85"
Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.616901 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85"} err="failed to get container status \"6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85\": rpc error: code = NotFound desc = could not find container \"6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85\": container with ID starting with 6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85 not found: ID does not exist"
Mar 07 09:40:15 crc kubenswrapper[4761]: I0307 09:40:15.752511 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" path="/var/lib/kubelet/pods/b40c22b0-16ac-4673-a2b2-7de701b83d0a/volumes"
Mar 07 09:40:27 crc kubenswrapper[4761]: I0307 09:40:27.621481 4761 scope.go:117] "RemoveContainer" containerID="a2c196f9e90c291d95a381c29ede90541c719aaffbcb56b9ea55ea69f9baef4f"
Mar 07 09:41:43 crc kubenswrapper[4761]: I0307 09:41:43.768630 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 09:41:43 crc kubenswrapper[4761]: I0307 09:41:43.770550 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.178990 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547942-qgbk7"]
Mar 07 09:42:00 crc kubenswrapper[4761]: E0307 09:42:00.180867 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerName="extract-utilities"
Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.180906 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerName="extract-utilities"
Mar 07 09:42:00 crc kubenswrapper[4761]: E0307 09:42:00.180945 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerName="registry-server"
Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.180961 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerName="registry-server"
Mar 07 09:42:00 crc kubenswrapper[4761]: E0307 09:42:00.180999 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687b5288-d27b-4cc0-8712-adf9c53fc9e0" containerName="oc"
Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.181016 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="687b5288-d27b-4cc0-8712-adf9c53fc9e0" containerName="oc"
Mar 07 09:42:00 crc kubenswrapper[4761]: E0307 09:42:00.181104 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerName="extract-content"
Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.181117 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerName="extract-content"
Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.181688 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="687b5288-d27b-4cc0-8712-adf9c53fc9e0" containerName="oc"
Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.181742 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerName="registry-server"
Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.183243 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547942-qgbk7"
Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.185488 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv"
Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.187633 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.188285 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.206182 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547942-qgbk7"]
Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.298453 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h25xl\" (UniqueName: \"kubernetes.io/projected/3b09f899-874b-4f5c-af05-fdbf24402c13-kube-api-access-h25xl\") pod \"auto-csr-approver-29547942-qgbk7\" (UID: \"3b09f899-874b-4f5c-af05-fdbf24402c13\") " pod="openshift-infra/auto-csr-approver-29547942-qgbk7"
Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.400816 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h25xl\" (UniqueName: \"kubernetes.io/projected/3b09f899-874b-4f5c-af05-fdbf24402c13-kube-api-access-h25xl\") pod \"auto-csr-approver-29547942-qgbk7\" (UID: \"3b09f899-874b-4f5c-af05-fdbf24402c13\") " pod="openshift-infra/auto-csr-approver-29547942-qgbk7"
Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.426105 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h25xl\" (UniqueName: \"kubernetes.io/projected/3b09f899-874b-4f5c-af05-fdbf24402c13-kube-api-access-h25xl\") pod \"auto-csr-approver-29547942-qgbk7\" (UID: \"3b09f899-874b-4f5c-af05-fdbf24402c13\") " pod="openshift-infra/auto-csr-approver-29547942-qgbk7"
Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.508192 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547942-qgbk7"
Mar 07 09:42:01 crc kubenswrapper[4761]: I0307 09:42:01.053252 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547942-qgbk7"]
Mar 07 09:42:02 crc kubenswrapper[4761]: I0307 09:42:02.018347 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547942-qgbk7" event={"ID":"3b09f899-874b-4f5c-af05-fdbf24402c13","Type":"ContainerStarted","Data":"3a1a3d221242d009c9e017af0f0c4013bb2768c4455a3ef933d8c11327bd161e"}
Mar 07 09:42:03 crc kubenswrapper[4761]: I0307 09:42:03.040789 4761 generic.go:334] "Generic (PLEG): container finished" podID="3b09f899-874b-4f5c-af05-fdbf24402c13" containerID="39f02140db55095d87a39a171be4aaa62281bd9f8dc8f9449b13c7a5b88fc09c" exitCode=0
Mar 07 09:42:03 crc kubenswrapper[4761]: I0307 09:42:03.040857 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547942-qgbk7" event={"ID":"3b09f899-874b-4f5c-af05-fdbf24402c13","Type":"ContainerDied","Data":"39f02140db55095d87a39a171be4aaa62281bd9f8dc8f9449b13c7a5b88fc09c"}
Mar 07 09:42:04 crc kubenswrapper[4761]: I0307 09:42:04.466741 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547942-qgbk7"
Mar 07 09:42:04 crc kubenswrapper[4761]: I0307 09:42:04.644841 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h25xl\" (UniqueName: \"kubernetes.io/projected/3b09f899-874b-4f5c-af05-fdbf24402c13-kube-api-access-h25xl\") pod \"3b09f899-874b-4f5c-af05-fdbf24402c13\" (UID: \"3b09f899-874b-4f5c-af05-fdbf24402c13\") "
Mar 07 09:42:04 crc kubenswrapper[4761]: I0307 09:42:04.651807 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b09f899-874b-4f5c-af05-fdbf24402c13-kube-api-access-h25xl" (OuterVolumeSpecName: "kube-api-access-h25xl") pod "3b09f899-874b-4f5c-af05-fdbf24402c13" (UID: "3b09f899-874b-4f5c-af05-fdbf24402c13"). InnerVolumeSpecName "kube-api-access-h25xl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 09:42:04 crc kubenswrapper[4761]: I0307 09:42:04.748200 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h25xl\" (UniqueName: \"kubernetes.io/projected/3b09f899-874b-4f5c-af05-fdbf24402c13-kube-api-access-h25xl\") on node \"crc\" DevicePath \"\""
Mar 07 09:42:05 crc kubenswrapper[4761]: I0307 09:42:05.077768 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547942-qgbk7" event={"ID":"3b09f899-874b-4f5c-af05-fdbf24402c13","Type":"ContainerDied","Data":"3a1a3d221242d009c9e017af0f0c4013bb2768c4455a3ef933d8c11327bd161e"}
Mar 07 09:42:05 crc kubenswrapper[4761]: I0307 09:42:05.077828 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a1a3d221242d009c9e017af0f0c4013bb2768c4455a3ef933d8c11327bd161e"
Mar 07 09:42:05 crc kubenswrapper[4761]: I0307 09:42:05.077899 4761 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547942-qgbk7" Mar 07 09:42:05 crc kubenswrapper[4761]: I0307 09:42:05.547898 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547936-87djr"] Mar 07 09:42:05 crc kubenswrapper[4761]: I0307 09:42:05.560312 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547936-87djr"] Mar 07 09:42:05 crc kubenswrapper[4761]: I0307 09:42:05.725188 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a431a66-89ba-47af-9e0e-e6312a8a3c98" path="/var/lib/kubelet/pods/9a431a66-89ba-47af-9e0e-e6312a8a3c98/volumes" Mar 07 09:42:13 crc kubenswrapper[4761]: I0307 09:42:13.768559 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:42:13 crc kubenswrapper[4761]: I0307 09:42:13.769552 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515152771410024450 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015152771411017366 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015152753631016515 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015152753631015465 5ustar corecore